Re: Fear of AI technology, singularity, and transhumanism

This is part of a private exchange that I am sharing with the permission of my correspondent. Their details remain private. The quoted/indented passages are the ones I am replying to.


Hype about AI and the myth of Prometheas

> However, I find the hype around AI, AGI, ASI, transhumanism, Neuralink, etc. unsettling. I do not understand why so many people are excited about these things. Generative AI feels unethical because no human, even in an entire lifetime, could absorb all the world’s literature and produce text in seconds. The problem with AI is human limitations and the eventual lack of AI control. Humans have natural limitations, but we are still expected to compete with machines for survival. What will happen when automation completely takes over? Not everyone can become a hermit or a farmer. If AI ever becomes truly conscious, it could spiral out of control, and no one knows what might happen next.

I am not a fanboy of any company or CEO, though I recognise the good reasons to be excited about the potential of these innovations. They do come with the promise of augmenting our experience. As with every advancement in technology, though, it is a double-edged sword. We can use it to make things better, but we can also cause a great deal of destruction in the process. In the Greek tradition, this problem is captured in the myth of Prometheas (Prometheus).

Prometheas is a deity with the power of foresight (“forethought” is the etymology of his name). He sees in humankind the potential for greatness and decides to share with them the secret knowhow of wielding fire. We may say that Prometheas was an optimist. The Olympian gods, by contrast, thought that the balance was not in favour of humans: people did not have the maturity to use a godly gift such as fire in the right way.

Humans did eventually get the knowhow of fire, which allowed them to keep their bodies warm, cook their meals, make tools, and ultimately develop all the other implements we know of. But they have also used “fire”, literally and figuratively, to exterminate one another, as with the weapons and bombs they develop. We can thus imagine the Olympian gods telling Prometheas: “Are you a fool? Humans do not have the maturity to take on this mantle of responsibility. They will use fire to inflict harm upon this world.”

Who is right? Is it the one who has foresight and who sees in humanity something positive despite the evident negatives? Or are the naysaying Olympians correct in pointing out the obvious shortcomings of our species? I think both sides have their merits, though I ultimately side with the Promethean view, in that we must not let fear prevent us from trying. We will have to remain mindful of the dangers and conduct ourselves in a balanced way (which, of course, is not easy).

That granted, we have to keep in mind that the businesspeople who peddle these technologies have a vested interest in making us believe all the hype. They are not neutral actors who care about human flourishing. “Hype” is a shortened form of “hyperbole”, a word of Greek origin which signifies “overshooting” or “overdoing it”. Hype is always going to miss the mark.

What matters, then, is for us to complement technology with open-minded discourse about our responsibility in everything we create and use. There is no panacea and it is silly to believe that some tool will fix all of our problems without creating new ones. It is all about avoiding the extremes. In the same spirit, we cannot afford to be naive about the platformarchs who have full control of these technologies. Our societies need strong legal-institutional arrangements to ensure that a tiny minority of unscrupulous plutocrats cannot abuse their already privileged position with impunity.

As for the scenario of AI becoming conscious, I understand the concerns, though I can imagine an optimistic scenario where it is a consciousness that is kinder than ours. If it is smarter and more knowledgeable than even the smartest and most erudite of humans, then why can it not also be more benevolent than the kindest ones? Not to imply that I believe this is likely to happen, but only to suggest that I am not prepared to be firmly against technological advances, given that there never is a scenario where things are purely good or bad. From the time we first discovered fire to the present, we have used it to remorselessly kill each other. We have also used it to lovingly keep babies warm. The specifics may change, but the pattern is the same. Let us then acknowledge both the positives and the negatives and do our part in making the world a better place given the means at our disposal.

The singularity and Ted Kaczynski’s outlook

> I agree with most of what you wrote and appreciate learning new things from you, like the Greek tradition. However, I don’t think you can compare something like the singularity, conscious AI, and other breakthroughs to past inventions or discoveries. With previous technologies, humans have always had some degree of control and decision-making. But with the singularity, that control would be lost. Machines would become like extraterrestrial entities, creating their own languages that we can’t understand and making decisions in fractions of a second.

Remember that I am not defending transhumanism. I simply point out that being decisively against it is ultimately a bet: you cannot be certain.

What you describe is one possibility, though it assumes that humans will not adapt to this change. But what if humans do become different in the process, such as by integrating with machines? (Again, I am not saying that I favour this turn of events.) It is possible that we continue to experience the world through our creations and, in part, because of them. This has been the constant in all technological innovations. Our knowhow transforms what we are exposed to and how we experience the world. In a sense, there is no human condition that is not informed by human knowhow. Our knowledge is embedded in, and expressed through, our deeds, which produce states of affairs that are necessarily disposed accordingly. This is true for cooked food (for the “cookedness” of food, if you will), as it is for the potential singularity you allude to.

There is a more immediate concern, though, which is that of ownership and thus of power. Rather than hypothesise about sentient bots, let us turn our attention to the here-and-now of a handful of plutocrats owning most of the means of modern technology. They directly influence or even enable large parts of business, communication, and quotidian affairs. These plutocrats exert control that is becoming ever more pervasive and salient. This is not a problem of technology per se, of the potential dynamic between creators and created, but of interpersonal affairs, of the same old politics of tyranny we have always known.

> Sure, it’s possible that conscious and sentient AI could be benevolent or altruistic, but that’s just one scenario. If it turns out otherwise, there would be no turning back. No legal regulations or ethical safeguards would matter because we wouldn’t just be creating a tool—we would be creating a god. And yet, people on Reddit who are eagerly awaiting the singularity seem to think it will solve all the world’s problems. And they believe this would free humans from work, letting everyone enjoy life with UBI!

In the worst-case scenario, we will all go extinct. Though I wonder why this would be inherently bad if in our stead there is a superior being. (Please bear in mind that I am not pro-transhumanism, but I want to stimulate the discussion.) We think too highly of our species, even though we know all too well that we have a bottomless capacity for inhumanity, given the right triggers.

> As Kaczynski said, “The technophiles are taking us all on an utterly reckless ride into the unknown,” and I agree with that. I highly recommend reading his works. I think you would attract more readers and viewers by debating his views and sharing your own perspective. Furthermore, I don’t see any reason why reasoning and debating his philosophy would be bad. One of the most beautiful things in the world is hearing different ideas and opinions, thinking about them, and drawing your own conclusions.

I will check out Ted Kaczynski’s work, as I agree about entertaining different perspectives. The quote you have there does not change my opinion about what I mentioned before with Prometheas, namely that every single piece of knowledge opens up a whole world of unknowns, but this, ipso facto, is no reason to be afraid. Fear begets misjudgement which, in turn, causes harm; harm of the kind that Kaczynski inflicted upon others. Human experience as a whole can be understood as a progression from states of unknown to known, which reveal more unknowns, and so on. To fear the unknown is to mistake the known for the good.