Christopher Nolan Shares Biggest Danger of AI


In an interview with Wired discussing his upcoming film Oppenheimer, Christopher Nolan shared his perspective on artificial intelligence, which has advanced rapidly in recent months and caused concern among many.

Nolan believes that the growing use of AI, particularly in weapon systems, has been a cause for alarm for years, although it received little attention until now. He sees a problem with the term “algorithm” and how companies employ algorithms and AI to evade accountability for their actions. 

“The growth of AI in terms of weapons systems and the problems that it is going to create have been very apparent for a lot of years,” he said. “Few journalists bothered to write about it. Now that there’s a chatbot that can write an article for a local newspaper, suddenly it’s a crisis.”


Nolan’s main concern with AI lies in how we perceive its capabilities. If we view AI as all-powerful, we risk absolving ourselves of responsibility for our actions, whether in military or socioeconomic contexts. “If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions—militarily, socioeconomically, whatever,” he said.

He warns against attributing godlike characteristics to AI, as doing so can lead to a dangerous tendency to relinquish accountability. Throughout history, humans have tended to create false idols and to claim godlike powers on the strength of their own creations. Nolan finds the mythological underpinnings of this phenomenon intriguing.

“The biggest danger of AI is that we attribute these godlike characteristics to it and therefore let ourselves off the hook. I don’t know what the mythological underpinnings of this are, but throughout history there’s this tendency of human beings to create false idols, to mold something in our own image and then say we’ve got godlike powers because we did that.”

While Nolan acknowledges the real danger associated with AI, he emphasizes that the genuine peril lies in the abdication of responsibility.

“I feel that AI can still be a very powerful tool for us. I’m optimistic about that. I really am,” he said. “But we have to view it as a tool. The person who wields it still has to maintain responsibility for wielding that tool. If we accord AI the status of a human being, the way at some point legally we did with corporations, then yes, we’re going to have huge problems.”

A recent LA Times article discussed ChatGPT and OpenAI, describing the technology as something now being offered for sale since OpenAI transitioned into a private company. The article raised the concern that such powerful technology should not be readily available to the public, though restricting access may only increase the allure of this dangerous tool.