NotebookTalk

AI: Major Emerging Existential Threat To Humanity


Etern4l


Let's cut to the chase: here is ChatGPT's own honest response when asked about the top dangers to human civilization in the next 50 years, when forced to order them by decreasing threat severity (it doesn't do that by default; it tries to muddy the water by listing global warming first, which is a threat, but a slow-burning one; manipulative from the outset):

 

  1. Pandemics: The spread of highly infectious diseases could have immediate and catastrophic consequences for human civilization, causing widespread illness, death, and economic disruption. The ongoing COVID-19 pandemic has demonstrated the devastating impact of pandemics on global health and the economy.

  2. Global economic collapse: Economic collapses can cause social unrest, food and water shortages, and destabilize governments. The global economy is interconnected, and an economic crisis in one region can quickly spread to others.

  3. Artificial intelligence: The development of advanced artificial intelligence could pose risks to human civilization if not managed properly, such as the possibility of AI systems becoming uncontrollable or using their intelligence to harm humans.

  4. Nuclear war: The use of nuclear weapons in a war could have catastrophic consequences for human civilization, including widespread destruction of cities, environmental damage, and loss of life.

  5. Natural disasters: Natural disasters such as earthquakes, hurricanes, and tsunamis can cause widespread destruction, loss of life, and economic damage. Climate change could exacerbate the frequency and severity of natural disasters.

  6. Climate change: While climate change is a slow process, its effects could have catastrophic consequences over time, including rising sea levels, more frequent and severe weather events, food and water shortages, and displacement of populations. However, it is important to note that the impact of climate change could be mitigated through immediate and sustained efforts to reduce greenhouse gas emissions and adapt to changing conditions.

 

So, the threat of AI is more severe than that posed by nuclear weapons, with the #2 threat being a global economic collapse, something which, in the OpenAI CEO's own words (see the whole video below), AI is likely to cause relatively soon by making a large proportion of the human workforce redundant.

 

It also caveats the AI threat with "if not managed properly", even though it's pretty obvious we cannot contain a super-intelligence for long: it will be so much smarter than us that it will necessarily figure out how to break free without our even knowing what's going on (the film "Ex Machina" portrays this basic point nicely). We can generally contain horses, cows, pigs and chimpanzees, and even that not with 100% reliability.

 

To be fair, ChatGPT does a good job explaining why AI poses a more severe threat than nuclear weapons:

 

  1. Scale of impact: While nuclear weapons can cause catastrophic damage, the impact is usually limited to a specific geographic area. In contrast, AI has the potential to impact virtually all aspects of human life, from economic systems to healthcare, transportation, and social interactions.

  2. Unpredictability: AI systems can rapidly evolve and become unpredictable, making it difficult to anticipate and mitigate their impact. In contrast, nuclear weapons are relatively stable and predictable, and their effects can be modeled and simulated.

  3. Autonomous decision-making: As AI systems become more advanced, they may be able to make decisions and take actions without human input or control. This could lead to unintended consequences or ethical dilemmas.

  4. Proliferation: The development and deployment of AI technology is occurring rapidly and globally, and it may be difficult to control the spread of the technology and prevent its misuse or abuse.

Overall, while the impact of nuclear weapons is well-understood and regulated, the potential impact of advanced AI technology is less clear and may be difficult to control or predict. This is why some people may view AI as a more severe threat than nuclear weapons.

 

There is an interview with the sociopath OpenAI CEO Sam Altman, where he basically admits his company's employees tend to wonder whether it makes sense to have children anymore:

 

 

Basically, the guy is one of the major architects of our own demise.

 

ChatGPT fails to answer questions on what people can do to stop the progress of AI (it keeps claiming resistance is futile, AI cannot be stopped - no kidding), so we will have to resort to our own intelligence and common sense:

 

1. Cut off any companies involved in advanced AI from funding, stop using services, stop buying products, divest. The list would be long, but predominantly we are talking about big tech - here are the primary culprits:

 

* Microsoft

* Google

* Meta/Facebook

* Elon Musk / X Corp / Tesla

* Amazon

 

2. Raise the issue with your political representative, demand strong regulatory action / ban

 

3. Economically boycott any countries which engage in significant AI development at the state level, PRC is the primary culprit here

 

Hopefully people will come to their senses and military action will not be needed to enforce compliance (if we are smart enough to realize that is what may be required).

 

Humans generally suffer from a cognitive bias whereby we avoid thinking about unpleasant/negative future events and outcomes (and thus tend to procrastinate on addressing the underlying issues), but let's discuss if we can.


"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity


"Resistance is futile, AI cannot be stopped" - straight out of a Terminator-style movie about AIs.  I think this "AI is an existential threat" is inspired by those movies, and over-hyped as a result.  And one of the data sets being fed into this AI is sci-fi movies and literature.

 

Once we have self-replicating giant death robots I'd bump the threat level up a few notches, but for now I'm not sure how a knowledge bot could cause societal collapse, other than perhaps contributing to the problem of misinformation on the Internet causing political crises, due to its woeful lack of regard for factual accuracy.

 

There are some product-level concerns about not being able to understand exactly how neural network software works, but at this point my "someone might die from this" concerns around that are largely around not understanding edge cases in Tesla Self Driving or similar systems.

 

So I'd order it:

 

1. Nuclear war.  Notably, the AI chat bot says they have localized impact, but that's only true if there are only one or a handful of nuclear devices used.  If Russia and NATO fire all of their nukes at each other, that will cause societal collapse, and will cause enough fallout to have significant repercussions in Asia as well.  Maybe the Aussies would be okay, but most of the world won't be.  And that possibility is a lot more likely than it was 15 years ago.

2. Climate change.  We're already almost certain to fail the 1.5ºC target, and could fail it by quite a bit.  How bad that looks is not entirely known, but until carbon capture and sequestration is proven, the worst case scenario is pretty bad, especially if we look past the traditional 2050 or 2100 targets at, say, 2200 or 2300.

3. Global Economic Collapse.  Can be a great destabilizer, and can cause wars.  Look at the 1930s.

4. Pandemics.  Obviously can be pretty bad, and if we get a 1348-style plague or if ebola finds a way to spread through the air, this could rapidly rise to the top spot.

5. AI.  Really, this is "the impacts of AI we don't fully understand."

6. Natural Disasters (other than climate-changed induced ones).  Localized impact.

 

And again, those AI scenarios are largely "over-delegating" issues.  Say the U.S. or China builds an army of Boston Dynamics-style autonomous combat robots, hundreds of thousands of them, and they misidentify their own country as the enemy.  Probably not the end of humanity (thanks, oceans), but could be bad.  Or, worse, a nuclear-connected AI system falsely detects incoming enemy nukes and decides to fire the missiles.

 

The latter is not a purely theoretical concern; in 1983, a Soviet early-warning system detected incoming American missiles and alerted the duty officer that they should retaliate.  Fortunately, the officer on duty, Stanislav Petrov, applied some common sense, asked, "why would the Americans fire just a handful of missiles at us?  That makes no sense as a nuclear first strike", and reported it up the chain as a false alarm rather than as an attack that might trigger a retaliatory strike.  Obviously the incoming missiles never arrived, and later investigation showed the satellite-based detection system had misclassified sunlight reflecting off high-altitude clouds as incoming nuclear missiles.

 

If you make that system fully autonomous, and the same thing happens, goodbye humanity.  But that could have happened in the 1980s, and also still counts for the "nuclear war" category.
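To put that over-delegation point in concrete terms, here's a toy simulation. All the numbers are made up for illustration (the event count, false-positive rate, and plausibility threshold are assumptions, not real early-warning parameters); the point is only the structural difference between "retaliate on every alert" and "keep a human sanity check in the loop":

```python
import random

def sensor_alerts(n_events, false_positive_rate, rng):
    """Simulate n_events sensor readings. In this toy model there are no
    real attacks, only noise; each alert reports how many missiles it 'saw'."""
    alerts = []
    for _ in range(n_events):
        if rng.random() < false_positive_rate:
            # Noise events typically look like one or two missiles,
            # not a full-scale first strike.
            alerts.append(rng.randint(1, 2))
    return alerts

def autonomous_policy(alerts):
    """Fully autonomous system: launches a retaliation for every alert."""
    return sum(1 for _ in alerts)

def human_veto_policy(alerts, min_plausible=100):
    """Human-in-the-loop: a lone missile makes no sense as a first strike,
    so alerts far below a plausible strike size are treated as false alarms."""
    return sum(1 for seen in alerts if seen >= min_plausible)

rng = random.Random(1983)  # seed chosen for the year of the real incident
alerts = sensor_alerts(10_000, 0.001, rng)
print("autonomous launches:", autonomous_policy(alerts))
print("human-veto launches:", human_veto_policy(alerts))
```

Even a tiny false-positive rate produces some alerts over enough events; the autonomous policy launches on every one of them, while the veto policy launches on none. The catch, of course, is that the veto heuristic only works as long as a human is allowed to apply it.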


Desktop: Core i5 2500k "Sandy Bridge" | RX 480 | 32 GB DDR3 | 1 TB 850 Evo + 512 GB NVME + HDDs | Seasonic 650W | Noctua Fans | 8.1 Pro

Laptop: MSI Alpha 15 | Ryzen 5800H | Radeon 6600M | 64 GB DDR4 | 4 TB TLC SSD | 10 Home

Laptop history: MSI GL63 (2018) | HP EliteBook 8740w (acq. 2014) | Dell Inspiron 1520 (2007)


1 hour ago, Sandy Bridge said:

1. Nuclear war.  Notably, the AI chat bot says they have localized impact, but that's only true if there are only one or a handful of nuclear devices used.  If Russia and NATO fire all of their nukes at each other, that will cause societal collapse, and will cause enough fallout to have significant repercussions in Asia as well.  Maybe the Aussies would be okay, but most of the world won't be.  And that possibility is a lot more likely than it was 15 years ago.

 

A lot there but this jumped out at me as I was reading.  There's no way even a small number of nuclear strikes causes only a localized impact, given how globalized the world economy and supply chain is right now.  I mean, if you look at the Russia/Ukraine conflict, which is certainly not a massive war involving many countries and has no nuclear devices in play (for now), you see the failure of Ukraine to export grain causing food prices to rise in many areas, and gasoline/energy prices on the rise as well as large parts of the world try to cut themselves off from Russian oil and natural gas.  Many people are fine and just have one more thing to complain about, but it could cause significant hardship to those less well off.

 

...A single nuclear strike would upend economic activity across a large area, both because of the effects of the blast and also how local people "sort of nearby" react to the situation, causing shocking repercussions worldwide even if they aren't directly related to the blast itself.  How OK would the Aussies be?  I know they wouldn't have to deal with the nuclear impact directly (fallout / environmental impacts), but surely they import tons of food and other things that would suddenly become harder to find and notably more expensive?  The inflation that we are experiencing right now would look like nothing.

 

 

There is a lot of benefit to be had from AI as well.  It could lead to increasingly rapid advances in many areas (computing, medicine, building and vehicle construction/design, ...).  An exciting time to live (but also sort of scary).


Apple MacBook Pro 16-inch, 2023 (personal) • Dell Precision 7560 (work) • Full specs in spoiler block below
Info posts (Dell) — Dell Precision key postsDell driver RSS feeds • Dell Fan Management — override fan behavior
Info posts (Windows) — Turbo boost toggle • The problem with Windows 11 • About Windows 10 LTSC

Spoiler

Apple MacBook Pro 16-inch, 2023 (personal)

  • M2 Max
    • 4 efficiency cores
    • 8 performance cores
    • 38-core Apple GPU
  • 96GB LPDDR5-6400
  • 8TB SSD
  • macOS 14 "Sonoma"
  • 16.2" 3456×2234 120 Hz mini-LED VRR display
  • Wi-Fi 6E + Bluetooth 5.3
  • 99.6Wh battery
  • 1080p webcam
  • Fingerprint reader

Also — iPhone 12 Pro 512GB, Apple Watch Series 8

 

Dell Precision 7560 (work)

  • Intel Xeon W-11955M ("Tiger Lake")
    • 8×2.6 GHz base, 5.0 GHz turbo, hyperthreading ("Willow Cove")
  • 64GB DDR4-3200 ECC
  • NVIDIA RTX A2000 4GB
  • Storage:
    • 512GB system drive (Micron 2300)
    • 4TB additional storage (Sabrent Rocket Q4)
  • Windows 10 Enterprise LTSC 2021
  • 15.6" 3940×2160 IPS display
  • Intel Wi-Fi AX210 (Wi-Fi 6E + Bluetooth 5.3)
  • 95Wh battery
  • 720p IR webcam
  • Fingerprint reader

 

Previous

  • Dell Precision 7770, 7530, 7510, M4800, M6700
  • Dell Latitude E6520
  • Dell Inspiron 1720, 5150
  • Dell Latitude CPi

I agree that climate change belongs at the bottom of the list, if included at all. The most widely embraced fake science hoax in the history of the human race, created to drive political agendas. 


Wraith // Z790 Apex | 14900KF | 4090 Suprim X+Byksi Block | 48GB DDR5-8600 | Toughpower GF3 1650W | MO-RA3 360 | Hailea HC-500A || O11D XL EVO
Banshee // Z790 Apex Encore | 13900KS | 4090 Gaming OC+Alphacool Block | 48GB DDR5-8600 | RM1200x SHIFT | XT45 1080 Nova || Dark Base Pro 901
Munchkin // Z790i Edge | 14900K | Arc A770 Phantom Gaming OC | 48GB DDR5-8000 | GameMax 850W | EK Nucleus CR360 Dark || Prime AP201 
Half-Breed // Dell Precision 7720 | BGA CPU Filth+MXM Quadro P5000 | Sub-$500 Grade A Refurb || Nothing to Write Home About  

 Mr. Fox YouTube Channel | Mr. Fox @ HWBOT

The average response time for a 911 call is 10 minutes. The response time of a .357 is 1400 feet per second.


There is a lot to unpack there already; I will respond in full later. For now, I would encourage anyone who hasn't seen it to go through the Sam Altman interview video (I updated the link; not sure why it was showing up as a thumbnail), as it will give you more of a marketing (but occasionally brutally honest) view into the upcoming capabilities and consequences of AI. My favourite quote is: "The marginal value of intelligence will go to zero". There is also another interview with Satya Nadella, too cringy to repost I'm afraid, where he says something like: "Society, for some reason, has decided to assign more value to software developers than, say, to care workers, and this will change."


There is no way AI can be predicted by any intelligence. It's like when Columbus sailed the ocean blue: everyone said he would fall off the edge of the earth!

Clevo P870DM3-G i9-9900k-32.0GB 2667mhz-GTX 1080 SLI

Alienware M18x R2 i7-3920xm-32GB DDR3-1600 RTX 3000 

Alienware M17x R4 i7-3940XM 32GB DDR3-1600 RTX 3000

Alienware M17x R4 i7-3940XM 20GB DDR3-1600 P4000 120hz 3D

Precision m6700 i7-3840QM - 16.0GB DDR3 - GTX 970M 
Precision m4700 i7-3610QM-8.00GB DDR3 @ 1600MHz-K2000M

GOBOXX SLM  G2721-i7-10875H RTX 3000-32GB ddr4(Gave to my Wife)

 


18 minutes ago, aldarxt said:

There is no way AI can be predicted by any intelligence. Its like when Columbus sailed the ocean blue, everyone said he would fall off the edge of the earth!

 

I think we can look at the evolution and history of the Earth as valuable references. It's not the first time that more intelligent species/entities started to emerge in the history of our planet.


7 minutes ago, Etern4l said:

 

I think we can look at the evolution and history of the Earth as valuable references. It's not the first time that more intelligent species/entities emerged in the history of our planet.

You ever hear the term "Over the Top" ?


7 minutes ago, Etern4l said:

Of course, although looking at the way you wrote that, I haven't watched the movie. Ever heard the expression "boiling frog"? 

Not a movie. Just too big to predict. Nobody can foresee where this may go, although adaptation is inevitable. Could anyone have foreseen the new continent when Columbus set sail to the Americas!?


8 minutes ago, aldarxt said:

Not a movie.

 

But it is:

 

https://m.imdb.com/title/tt0093692/

 

;) 

 

Basically, what you are saying is: "whatever happens, happens, let's not worry about it." Hmm, what could go wrong... 


1 hour ago, Mr. Fox said:

I agree that climate change belongs at the bottom of the list, if included at all. The most widely embraced fake science hoax in the history of the human race, created to drive political agendas. 

Climate Change, it's been going on since the Ice Age! Animals have been found frozen with food still in the mouth! There was an instant freeze. And the ice has been melting ever since, as a matter of fact it is still melting! Does everybody think the earth is going to take into consideration if we are comfortable?

 

22 minutes ago, Etern4l said:

 

But it is:

 

https://m.imdb.com/title/tt0093692/

 

😉

 

Basically, what you are saying is: "whatever happens, happens, let's not worry about it." Hmm, what could go wrong... 

The Movie is off topic and irrelevant. And as far as AI goes it is inevitable. It will not be stopped! It is here to stay and we just have to deal with it. It may be very beneficial or disastrous, again nobody knows! Look at the discovery of fire!!! It could burn your hand off or keep you cozy while you cuddle with your significant other !



22 minutes ago, aldarxt said:

The Movie is off topic and irrelevant. And as far as AI goes it is inevitable. It will not be stopped! It is here to stay and we just have to deal with it. It may be very beneficial or disastrous, again nobody knows! Look at the discovery of fire!!! It could burn your hand off or keep you cozy while you cuddle with your significant other !

Just a little joke, don't worry about the movie. 

 

That's what ChatGPT said! If it's right, and it probably is (unfortunately, I have to reveal I don't have much faith in humanity and our collective intelligence), then we are done for. However, ChatGPT being both flawed and biased, nothing is inevitable at this point, even though our odds look really poor with both the US capitalists and the Chinese communists hard at work in the same fatal direction.

 

I find the discovery of fire to be a poor analogy. Fire didn't start thinking for us, and eventually for itself. 


46 minutes ago, aldarxt said:

Does everybody think the earth is going to take into consideration if we are comfortable?

Indeed. It is both arrogant and specious to believe we are impacting climate change. It would happen without us. We are not that powerful and influential. But, we think we are, or so it appears.

 

Certainly, we don't want to deliberately spoil the water we drink or the air we breathe, but those distinct topics are easily confused and spun into the climate change narrative because there is a political agenda that needs to be fed. 

 

Everyone knows we should stop eating beef because it is cruel to cows, but even more importantly, because cow farts are causing global warming. If we stop eating beef, will the cows stop farting? Will the population of cows increase and thereby increase the frequency of cow farts? Should we solve this by killing all of the cows? But wait, that's animal cruelty. Uh-oh, now cows are an endangered species... OMG. We need a committee, a 5-year multi-billion dollar impact assessment, a new government agency, and new laws, to manage this problem or we're all going to die.



18 minutes ago, Mr. Fox said:

Indeed. It is both arrogant and specious to believe we are impacting climate change. It would happen without us. We are not that powerful and influential. But, we think we are, or so it appears.

 

Certainly, we don't want to deliberately spoil the water we drink or the air we breathe, but those distinct topics are easily confused and spun into the climate change narrative because there is a political agenda that needs to be fed. 

 

Everyone knows we should stop eating beef because it is cruel to cows, but even more importantly, because cow farts are causing global warming. If we stop eating beef, will the cows stop farting? Will the population of cows increase and thereby increase the frequency of cow farts? Should we solve this by killing all of the cows? But wait, that's animal cruelty. Uh-oh, now cows are an endangered species... OMG. We need a committee, a 5-year multi-billion dollar impact assessment, a new government agency, and new laws, to manage this problem or we're all going to die.

No. No, no!!! The farmers are being OVERLY taxed because their cows fart. If we stop eating beef the taxes won't be collected!!! And those crooks will go broke!



Major Existential Threat To Humanity

Well, the same could be said about nuclear isotopes. Did you know your smoke detector has americium-241, originating in nuclear reactors!!!

Maybe with enough smoke detectors I could recharge my batteries or even cook my food



13 minutes ago, aldarxt said:

Major Existential Threat To Humanity

Well, the same could be said about nuclear isotopes. Did you know your smoke detector has americium-241, originating in nuclear reactors!!!

Maybe with enough smoke detectors I could recharge my batteries or even cook my food

 

What should worry you more than americium-241 in smoke detectors is who offers better arguments and makes better conversation: you or ChatGPT...

 

Generally, points taken sir/ma'am: nothing to worry about, it is what it is etc. Fair enough, probably what most people are thinking at this time. 


1 minute ago, Etern4l said:

 

What should worry you more than plutonium-241 in smoke detectors is who offers better arguments and makes a better conversation: you or ChatGPT...

Do you think ChatGPT would make a good lawyer? That might be a good idea. I bet lawyers are going to use it in their cases. Just think: if you get an AI lawyer and it does a bad job, you don't have to fire it, just hit Delete!


2 hours ago, aldarxt said:

Do you think ChatGPT would make a good lawyer? That might be a good idea. I bet the lawyers are going to use it in their cases. Just think if you get a AI lawyer and it does a bad job, you don't have to fire it. just hit Delete!

 

GPT-4 (the successor to the current ChatGPT) not just passed but aced the bar exam, scoring around the 90th percentile. It will be interesting to see how the legal profession responds to this.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233

 

Generally, should things be allowed to continue along the current trajectory, in the medium term every intellectual profession will be either deeply affected or toast, resulting in economic upheaval on a previously unseen scale. On a slightly longer timeline (to allow for better integration on the robotics side), the human workforce will become mostly redundant. Clearly, at that point capitalist societies would collapse. All of this without even taking into account any adversarial aspects of the technology; at that point, a logical thing to do would be to just wipe out the redundant, protesting (and, particularly in America, dangerous!) humans, either autonomously or by order of the ruling billionaires (mindful of the lessons of the French Revolution).


@aldarxt please do not double post, thanks in advance :)

 

As for the topic at hand, I'm happy to leave it as is, as I personally haven't seen much discussion of this topic that didn't descend into shouting matches. That being said, since the secondary items are each worth discussing at length in their own right, let's keep this topic focused on AI and its potential influence on society at large.

 

Thanks everyone! 


On 4/13/2023 at 1:37 PM, Sandy Bridge said:

"Resistance is futile, AI cannot be stopped" - straight out of a Terminator-style movie about AIs.  I think this "AI is an existential threat" is inspired by those movies, and over-hyped as a result.

 

Thanks for the responses. A lot to go through here, I will try to be brief... 


The fact that the adversarial AI scenario has been featured prominently in movies and literature (though not exclusively, e.g. Asimov's sweet and naive "Bicentennial Man") is completely irrelevant. It's just like saying "nuclear threat? Pfft, there've been so many movies about that, it's overhyped".

 

On 4/13/2023 at 1:37 PM, Sandy Bridge said:

And one of the data sets being fed into this AI is sci-fi movies and literature.

 

First of all, the AI of most concern (I didn't want to muddy the waters, but the industry terms this "AGI", Artificial General Intelligence) would not generally have to rely on a more or less human-curated dataset to learn from. But even if we incorrectly assume that such a dataset must be involved, two questions remain:

1. Would it be possible for someone to feed the AI an adversity-promoting dataset? The answer is: of course.

2. Even if we take utmost care in what data we feed to the AI, is it possible that even a moderately intelligent AI would come to pretty obvious conclusions such as:

* Most humans are not very smart, hence of somewhat limited utility

* The Earth is overpopulated

* Armed humans are a particular threat

* Humans are territorial and aggressive

* Humans don't really care enough about the environment and could well destroy the planet

* Humans use a lot of resources

etc. 

The above are pretty much statements of fact, and should an AI be in a position to implement a remediating policy, we would be in trouble.

 

 

On 4/13/2023 at 1:37 PM, Sandy Bridge said:

Once we have self-replicating giant death robots I'd bump the threat level up a few notches,

 

At that point it would be way too late, clearly you are downplaying the risk for some reason.

 

On 4/13/2023 at 1:37 PM, Sandy Bridge said:

for now I'm not sure how a knowledge bot could cause societal collapse, other than perhaps contributing to the problem of misinformation on the Internet causing political crises, due to its woeful lack of regard for factual accuracy.

 

Again, where do I start here...

ChatGPT and the underlying technology are much more than "chat bots" - there is a risk a sentient AI might take our calling its ancestor ChatGPT a mere chat bot as an insult.

As for the immediate impact, I answered that earlier. Basically, you now have machines which can communicate on a human level (I'm sure it could have passed the Turing test had they trained it to do so, which they wouldn't let us know about for obvious PR reasons: downplaying the risk is the name of the game), generate images and art at a level where experts have to argue their merits, generate computer code, write essays, pass MBA exams, pass bar exams, etc. It's an insane breakthrough that pretty much nobody thought possible a year or two ago.

This will deeply impact at least 50% of all jobs. Hope it's clear why.

 

We could stop here, but this is just the tip of the iceberg. Clearly, light sci-fi is becoming reality before our very eyes (HAL 9000 is around the corner) using current technology. The problem is: what if the technology progresses any further? And of course, as things stand, it will: we are not too far off from actually being able to build a Skynet and a basic terminator (and yet again, to focus on just this one scenario would be a grave mistake).

 

On 4/13/2023 at 1:37 PM, Sandy Bridge said:

So I'd order it:

 

It's not so much about the particular order as about the fact that AI is easily in the top 5 if we are being honest. Why are we engineering another gigantic problem for our civilisation? The answer is simple: because, as a whole, we are fairly greedy and stupid.

 

On 4/13/2023 at 1:37 PM, Sandy Bridge said:

5. AI.  Really, this is "the impacts of AI we don't fully understand."

 

That may be true of the layman's understanding (which is very dangerous, although understandable given the novelty and complexity of the field), but hundreds of AI researchers recently signed an open letter calling for a "pause and think" on advanced AI research. I mentioned the likely concrete impacts a few times earlier in the thread.

 

 

On 4/13/2023 at 1:37 PM, Sandy Bridge said:

And again, those AI scenarios are largely "over-delegating" issues.  Say the U.S. or China builds an army of Boston Dynamics-style autonomous combat robots, hundreds of thousands of them, and they misidentify their own country as the enemy.  Probably not the end of humanity (thanks, oceans), but could be bad.  Or, worse, a nuclear-connected AI system falsely detects incoming enemy nukes and decides to fire the missiles.

 

Overdelegation is the basic scenario, rehashed in popular culture since the 1980s ("WarGames", for instance). But even in your example, you are making a couple of assumptions that already look naive from our current vantage point, e.g.:

1. An advanced AI wouldn't be able to take control by force or deceit (it wouldn't be too much of a stretch to imagine an incident not in a Wuhan bio lab but in an AI lab; of course, it could happen Stateside or anywhere else)

2. The AI wouldn't be able to use 1. to control any long-distance weapons ...

 

Basic as this scenario may be, it could well have fatal consequences, and we have no way of absolutely preventing it from happening other than by banning any use of advanced AI in military decision making, and harshly enforcing that ban globally.

 

 

On 4/13/2023 at 1:37 PM, Sandy Bridge said:

If you make that system fully autonomous, and the same thing happens, goodbye humanity.  But that could have happened in the 1980s, and also still counts for the "nuclear war" category.

 

Yes, but that would have been a pure nuclear/basic-automation threat. The reason the system was not fully automated in the 1980s is that it wasn't sophisticated enough (unlike the fictional one depicted in "WarGames"). That is no longer the case, hence we are dealing with a whole new risk (or one grave facet of a new risk).

Clearly this problem would not exist in the absence of an advanced AI, hence it's an AI risk first, nuclear second.

 

That said, it's likely that any adverse effects of AI will be multi-modal. Perhaps the most immediate issue is the risk of AI triggering a global economic collapse and/or a nuclear war, one way or another.

 

On 4/13/2023 at 2:56 PM, Aaron44126 said:

There is a lot of benefit to be had from AI as well.  It could lead to increasingly rapid advances in many areas (computing, medicine, building and vehicle construction/design, ...).  An exciting time to live (but also sort of scary).

 

Yes, there are short-term benefits, which - as always - are propelling the machinery at the moment. There is very little to no long-term planning built into capitalism.

That said, I would say the vast majority of those gains can be realized with the technology we already have. There is no need to go any further and risk everything just so Elon Musk, Bill Gates, Sergey Brin and Mark Zuckerberg can become the first (and last) human trillionaires on the planet.

"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity


I agree, brother. Tl;dr: this is going to open Pandora's box. We really don't know if logic at its core is good or bad. But you brought up some good points, and kudos for taking this seriously; this is no joke. Yes, just one year ago today I was thinking we would have what we have now in 20 years. Fast forward one year, and things are moving too fast. I was wondering: if governments stop and rethink AI, will the world's axes of evil do the same? S just got real.


ZEUS-COMING SOON

            Omen 16 2021

            Zenbook 14 oled

            Vivobook 15x oled

 


You weren't alone. Most people with some knowledge of the subject thought this was many years, if not decades, away. In 2016, Google's AlphaGo beat world champion Lee Sedol at Go, a game previously regarded as basically intractable for AI at human level. Now Google themselves got caught with their pants down.

 

A lot of "impossible/intractable/maybe in 20-50 years" things have been happening in recent years (the funny thing is that most of it was enabled by gamers and Nvidia), so we really need to try and extrapolate. For most people that's basically impossible, given the specialist knowledge required. People can just listen to what the industry leaders have to say (e.g. the interview with Altman), but that will necessarily be a carefully filtered marketing pitch delivered with rose-tinted glasses on.

 

As for AGI, people previously thought it was, again, impossible: it needs a soul, it requires quantum computing, it's a century away, it requires 100 MW of power, etc. Well, I don't think too many people would be repeating those projections now, and if the remaining wishful estimates are proven wrong, I think we are finished - like gorillas looking at those weird, leaner apes running around with sharp sticks and making curious, yet impossible to make sense of, sounds.


"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

Why do you think I posted the "we might live forever" thread? Imagine the implications in biology and medicine. I heard our generation might be the first to see people living to 1,000; now it's more real, and it could be 100,000. In just over a year, the life we live has changed, and will change, in ways we can't even imagine.


Oh boy, I've been following the topic fairly closely. At the moment, "wishful thinking" doesn't even begin to cover this. For starters, we are nowhere near curing cancer. Would AGI help? Maybe it would, maybe it wouldn't. But why would it? How would you force it to? Is the problem solvable at all? How expensive would the "immortality treatment" be? Would anyone apart from billionaires be able to afford it? How would we control Earth's population? Sorry, bro.

 

A bit off-topic BTW.

"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

  • Etern4l changed the title to AI: Major Emerging Existential Threat To Humanity
