Etern4l Posted April 13 (edited)

Let's cut to the chase: here is ChatGPT's own honest response when asked about the top dangers to human civilization in the next 50 years, when forced to order them by decreasing threat severity (it doesn't do that by default; it tries to muddy the water by listing global warming first, which is a real threat but a slow-burning one. Manipulative from the outset):

1. Pandemics: The spread of highly infectious diseases could have immediate and catastrophic consequences for human civilization, causing widespread illness, death, and economic disruption. The ongoing COVID-19 pandemic has demonstrated the devastating impact of pandemics on global health and the economy.

2. Global economic collapse: Economic collapses can cause social unrest, food and water shortages, and destabilize governments. The global economy is interconnected, and an economic crisis in one region can quickly spread to others.

3. Artificial intelligence: The development of advanced artificial intelligence could pose risks to human civilization if not managed properly, such as the possibility of AI systems becoming uncontrollable or using their intelligence to harm humans.

4. Nuclear war: The use of nuclear weapons in a war could have catastrophic consequences for human civilization, including widespread destruction of cities, environmental damage, and loss of life.

5. Natural disasters: Natural disasters such as earthquakes, hurricanes, and tsunamis can cause widespread destruction, loss of life, and economic damage. Climate change could exacerbate the frequency and severity of natural disasters.

6. Climate change: While climate change is a slow process, its effects could have catastrophic consequences over time, including rising sea levels, more frequent and severe weather events, food and water shortages, and displacement of populations.
However, it is important to note that the impact of climate change could be mitigated through immediate and sustained efforts to reduce greenhouse gas emissions and adapt to changing conditions.

So, the threat of AI is more severe than that posed by nuclear weapons, with the #2 threat being a global economic collapse, something that, in the OpenAI CEO's own words (see the whole video below), AI is likely to cause relatively soon by making a large proportion of the human workforce redundant. It also caveats the AI threat with "if not managed properly", even though it's pretty obvious we cannot contain a super-intelligence for long, because it will be so much smarter than us that it will necessarily figure out how to break free without us even knowing what's going on (the film "Ex Machina" portrays this basic point nicely). We can generally contain horses, cows, pigs and chimpanzees, and even then not with 100% reliability.

To be fair, ChatGPT does a good job explaining why AI poses a more severe threat than nuclear weapons:

* Scale of impact: While nuclear weapons can cause catastrophic damage, the impact is usually limited to a specific geographic area. In contrast, AI has the potential to impact virtually all aspects of human life, from economic systems to healthcare, transportation, and social interactions.

* Unpredictability: AI systems can rapidly evolve and become unpredictable, making it difficult to anticipate and mitigate their impact. In contrast, nuclear weapons are relatively stable and predictable, and their effects can be modeled and simulated.

* Autonomous decision-making: As AI systems become more advanced, they may be able to make decisions and take actions without human input or control. This could lead to unintended consequences or ethical dilemmas.

* Proliferation: The development and deployment of AI technology is occurring rapidly and globally, and it may be difficult to control the spread of the technology and prevent its misuse or abuse.
Overall, while the impact of nuclear weapons is well-understood and regulated, the potential impact of advanced AI technology is less clear and may be difficult to control or predict. This is why some people may view AI as a more severe threat than nuclear weapons.

There is an interview with the sociopath OpenAI CEO Sam Altman, where he basically admits his company's employees tend to wonder whether it makes sense to have children anymore. Basically, the guy is one of the major architects of our own demise.

ChatGPT refuses to answer questions about what people can do to stop the progress of AI (it keeps claiming that resistance is futile and AI cannot be stopped; no kidding), so we will have to resort to our own intelligence and common sense:

1. Cut off any companies involved in advanced AI from funding: stop using their services, stop buying their products, divest. The list would be long, but predominantly we are talking about big tech. Here are the primary culprits:

* Microsoft
* Google
* Meta/Facebook
* Elon Musk / X Corp / Tesla
* Amazon

2. Raise the issue with your political representative; demand strong regulatory action or an outright ban.

3. Economically boycott any countries which engage in significant AI development at the state level; the PRC is the primary culprit here.

Hopefully people will come to their senses and military action will not be needed to enforce compliance (if we are smart enough to realize that's what may be required). Humans generally suffer from a cognitive bias whereby we avoid thinking about unpleasant or negative future events and outcomes (and thus tend to procrastinate on addressing the underlying issues), but let's discuss if we can.

Edited April 15 by Etern4l: Added Amazon to the list