Musk, Epstein, Chesterton and the quest for intelligence
There’s a recurring theme in the press these days, and its latest incarnation is Elon Musk speculating on record that our world may be a cunning virtual-world simulation powered by artificial intelligence. You can also read in many places that the robots are taking over, alongside reports of the latest advances in Google’s and Microsoft’s deep learning technologies. I find it fascinating, and I believe we will eventually reach a stage where the combined advances of robotics, automation, artificial intelligence and blockchain-based platforms and tools unleash both very positive and very negative potential in our societies. We will have to adapt to it collectively. Industries will disappear; new ones will emerge. Much of middle management may be washed away in less than twenty years. But that’s not what I want to discuss. What I would like to point out is my growing concern about the idea that artificial intelligence will somehow replace human intelligence, and that robots will become more powerful than our brains.
I’m of course not dismissing these notions out of hand. But there’s a real myopic conundrum here, and it keeps the debate obscure and convoluted. To put it simply, the headlines these days are full of bombastic assertions that artificial intelligence will soon be more powerful than the human mind, that robots will rule over us, and that humankind will be changed forever, not necessarily for the better. I’m not saying these assertions are nonsense; I’m saying these are complex matters that are poorly understood, even by experts.
As Chesterton pointed out, we drive ourselves insane by reasoning within a frame of thought that seems right and accurate because it cannot really be proven wrong, while the frame itself is too narrow to allow proper and careful analysis of the matter. In this case, the psychologist Robert Epstein recently argued in Aeon that everybody seems to reason about artificial intelligence becoming superior to the human brain by explaining the brain in information-technology terms. The problem, as he shows, is not only that the human brain does not work at all like a computer, but that even top-notch experts seem unable to use a different metaphor to explain their understanding of it. That is where the fundamental problem lies. The human brain may indeed not work like a computer; it may not use algorithms and may not really process data (that is what brain specialists and neurosurgeons seem to say), but the real crux is that we do not seem to know how to move beyond the computer metaphor to explain how the brain works. This is not a new problem. History offers examples of thinkers explaining the brain in mechanical terms during the seventeenth and eighteenth centuries, and before that in terms of the flows of the four humors. Ultimately, none of these theories proved satisfactory. Different era, different metaphor; this time, our theories do not seem to be any more relevant.
We should not be disappointed by that, I guess. But we should be wary of basing our understanding of complex matters such as artificial intelligence and robotics on a dysfunctional metaphor. Artificial intelligence will very likely be more powerful than the human brain one day… in terms of computing power. That is one of the core theses of Ray Kurzweil, whose books I’ve enjoyed reading for years. I agree with several of his predictions, especially when it comes to nanotechnology, but that does not make me a transhumanist. And when it comes to the brain, even Kurzweil applies a warped vision of its nature.
If we base our fundamental understanding of the brain on the notion of computing power, we are relying on the computing metaphor, and that is simply not what the brain does or what defines us as humans. Now, I’m sure that relying on artificial intelligence has its set of benefits and drawbacks, but we should not be afraid of its “computing power” versus our brain’s abilities. Computing power does not equate to intelligence, and artificial intelligence is primarily an artifice: it mimics intelligence, and it very likely helps our understanding of complex matters, but it does not create a smart, conscious, sentient being, no matter the power and architecture of the processors it runs on. Let me give a trivial example. I use mu, the mail indexer, to search my emails in plain text from Emacs. mu is built on Xapian, and it’s a great technology; its search capability and performance are amazing. My brain could not possibly sift through my heavy inboxes that fast. But neither Xapian nor mu is as powerful or as complex as our brain. They complement it, and they can interface with other tools that help highlight patterns and meaning in a maze of data. That is not what our brains primarily do.
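To make the mu example concrete, this is roughly what such searches look like on the command line. A sketch only: the maildir path is an assumption, and the exact indexing flags vary between mu versions (newer releases split setup into `mu init` followed by `mu index`).

```shell
# Build or refresh the Xapian index over a local maildir
# (path is illustrative; adjust to your own mail store).
mu index --maildir=~/Maildir

# Full-text query: messages from a given sender mentioning "xapian",
# sorted by date, newest first.
mu find from:alice xapian --sortfield=date --reverse

# Fielded query: unread messages that carry an attachment.
mu find flag:unread flag:attach
```

The same query language is what mu4e uses inside Emacs, which is why a search over tens of thousands of messages returns almost instantly: the hard work was done once, at indexing time.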
Artificial intelligence may grow in complexity to encompass a broad range of human activity. But it will never really replace us. It is a means, not an end; a tool, not an idol. Humans cannot become “slaves” of robots, no matter how “good” and “smart” they become one day. Humans can, however, destroy themselves by the very power of their thinking (which machines do not really possess) and by that innate and unique ability called “free will”. The rest is a fascinating, yet all too vain, heap of constructs, tools, and metaphors.