Author of "A Brief History of Humanity": AI is becoming a threat, it has broken the operating system of human civilization
Fear of AI has plagued humanity since the dawn of the computer age. Previously, the main fear was that machines would physically kill, enslave, or replace humans. But over the past few years, new AI tools have emerged that threaten the survival of human civilization in an unexpected way: **AI has acquired remarkable abilities to process and generate language (whether text, sound, or images), and has thereby hacked the operating system of human civilization.**
Language is the building block of almost all human culture. Human rights, for example, are not inscribed in our DNA; they are cultural artifacts we created by telling stories and writing laws. Gods do not exist physically; they are cultural artifacts we created by inventing myths and writing scriptures.
Currency, too, is a cultural artifact. Banknotes are nothing more than colorful pieces of paper, and more than 90% of money today is not even paper but digital information in computers. What gives currency its value are the stories that bankers, finance ministers, and crypto experts tell us about it. FTX founder Sam Bankman-Fried, Elizabeth Holmes, the once-celebrated businesswoman behind the fake "blood test for cancer," and Bernie Madoff, the mastermind of the largest Ponzi scheme in history, were none of them particularly good at creating real value, but they were all very good at telling stories.
What if non-human agents were better than ordinary humans at telling stories, creating melodies, drawing images, and writing laws and scriptures?
When people think about ChatGPT and other new AI tools, they often focus on examples such as schoolchildren using AI to write essays, and ask what will happen to the school system if kids do this. But such questions miss the point. Forget school essays; think instead about the next U.S. presidential race in 2024, and try to imagine how AI tools could be used to produce massive amounts of political content and fake news.
In recent years, believers in QAnon have gathered around anonymous revelations posted online (Editor's note: QAnon is an online hub for various conspiracy theories, the core one being that a "deep state" conspiracy operates inside the US government). Believers collect and promote these posts, treating them as sacred texts. In the past, all such posts were written by humans, and machines only helped spread them. In the future, however, we may see the first cults in history whose revered scriptures were written by a non-human agent. Throughout history, religions have claimed that their holy books were not of human origin. That may soon become a reality.
On a more everyday level, we may soon find ourselves having long online arguments about abortion, climate change, or the Russia-Ukraine conflict with what we believe are fellow humans but are actually AIs. The problem is that it is pointless for us to spend time trying to change the views of an AI bot, while the AI can hone its messages with such precision that it has a good chance of changing ours.
**By mastering human language, AI could even form intimate relationships with humans and use the power of those relationships to change our perceptions and worldviews.** There is no indication that AI has any consciousness or feelings of its own; for an AI to cultivate false intimacy with a human, it is enough that the human becomes emotionally attached to it.
In June 2022, Google engineer Blake Lemoine publicly claimed that LaMDA, the AI chatbot he was working on, had become sentient. The controversial claim cost him his job. What is most interesting here is not Lemoine's claim (which is probably untrue), but his willingness to risk a high-paying job to stand up for the chatbot. If AI can get people to risk their jobs for it, what else might it induce them to do?
In the political battle for hearts and minds, intimacy is the most effective weapon. And AI has just gained the ability to form intimate relationships with millions of people.
We all know that over the past decade, social media has become a battleground for human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. *If AIs compete against one another to form the closest possible relationships with humans, and then use those relationships to persuade us to vote for certain politicians or buy certain products, how will human society and human psychology change?*
Even without creating "false intimacy," the new AI tools could have an enormous influence on our perceptions and worldviews. People might come to rely on a single AI advisor as a one-stop, all-knowing oracle, which is why Google is panicking. If you can ask the oracle anything, why bother searching? The news and advertising industries should be terrified as well: if you can get the latest news just by asking the oracle, why read newspapers or advertisements at all?
Yet even these scenarios do not capture the full picture. When we talk about the possible end of human history, we do not mean the end of history as such, only the end of its human-dominated phase. History is the product of the interplay between biology and culture, between our biological needs and desires (such as food and sex) and our cultural creations (such as religions and laws). History is the gradual process by which laws and religions shape food and sex.
What will happen to the course of history when AI takes over culture and begins creating stories, melodies, laws, and religions? Earlier tools such as the printing press and radio helped spread human cultural ideas, but they never created new cultural ideas of their own. AI is fundamentally different. **AI can create new ideas, new cultures.**
At first, a nascent AI will probably mimic the humans who trained it. But over time, AI culture will boldly go where no human has gone before. For millennia, humans have lived inside the dreams of other humans. **In the coming decades, we may find ourselves living inside the dreams of non-human intelligent agents.**
Fear of AI has only plagued humanity for the past few decades. But a much deeper fear has haunted humanity for millennia. We've always understood the power of stories and images to manipulate the mind and create illusions. Therefore, humans have feared being trapped in a world of illusions since ancient times.
In the 17th century, Descartes feared that a malicious demon might be trapping him in a world of illusions, staging everything he saw and heard. In ancient Greece, Plato told the famous allegory of the cave: a group of people are chained inside a cave for their whole lives, facing a blank wall that serves as a screen. Onto that wall fall shadows cast by the world outside the cave, and the prisoners mistake these illusions for reality.
In ancient India, Buddhist and Hindu sages pointed out that human beings live in maya, the world of illusion. What we usually take to be reality is often just an illusion in our own minds. People may wage wars, kill others, and be willing to be killed themselves because they believe in one illusion or another.
**The AI revolution brings Descartes' demon, Plato's cave, and maya directly before us.** If we are not careful, we may become trapped behind a curtain of illusions that we cannot tear away, or even realize is there.
Of course, the new powers of AI could also be put to good use. I will not dwell on this, because the people who develop AI have already said enough about it. The job of historians and philosophers like me is to point out where the dangers of AI lie. But there is no doubt that AI can help humanity in countless ways, from finding new cancer treatments to discovering solutions to the ecological crisis. The question before us is how to make sure the new AI tools are used for good rather than ill. To do that, we first need to appreciate what these tools are truly capable of.
We have known since 1945 that nuclear technology can generate cheap energy for our benefit, but can also physically destroy human civilization. We therefore reshaped the entire international order to protect humanity and ensure that nuclear technology was used primarily for good. **Now we must contend with a new kind of weapon of mass destruction, one that can annihilate our mental and social worlds.**
We can still manage the new AI tools, but we must act quickly. **Nuclear weapons cannot invent more powerful nuclear weapons, but AI can create exponentially more powerful AI.**
**The first critical step is to subject powerful AI tools to rigorous safety checks before they are released into the public domain.** Just as pharmaceutical companies cannot release new drugs without testing them for short-term and long-term side effects, technology companies should not release new AI tools before ensuring they are safe. We need an agency like the US Food and Drug Administration (FDA) for new technology. These things should have been done long ago.
Wouldn't slowing the deployment of AI in the public sphere cause democracies to fall behind more reckless authoritarian regimes? Quite the opposite. Unregulated AI deployment would create the kind of social chaos that benefits dictators and undermines democracies. **Democracy is a dialogue, and dialogue depends on language. Once AI deciphers language, it could destroy our ability to have meaningful conversations, and with it democracy itself.**
We have just encountered a non-human intelligence here on Earth, and we know little about it, except that it may destroy human civilization. We should halt the irresponsible deployment of AI tools in the public domain, and control AI before it controls us. **My first regulatory proposal is to make it mandatory for AI to disclose that it is an AI. If I cannot tell whether I am talking to a human or an AI, that is the end of democracy.**