NotebookTalk

AI: Major Emerging Existential Threat To Humanity


Etern4l

Recommended Posts

 

 

The complete opposite of what you think: basically, yes, it's a danger, but not in the way this thread preaches.

 

 


18 minutes ago, ryan said:

 

 

The complete opposite of what you think: basically, yes, it's a danger, but not in the way this thread preaches.

 

 

First comment:

So as of 2019, AI will not intentionally kill us, just accidentally kill us. Got it!

31 minutes ago, Reciever said:

First comment:

So as of 2019, AI will not intentionally kill us, just accidentally kill us. Got it!

 

Beat me to it. Good old 2019...

 

Again, a TED talk by a lady who was then in the odd business of documenting the weirdness of fledgling AI (based on examples coded by middle-school students; no offense, no doubt very talented kids). Not saying there is no weirdness to document these days, but in most areas things are on a different level. Here, for example, we can watch endless videos of a dude driving a Tesla on FSD, where the weirdness amounts to "wow, that left turn, I would have waited but it worked well; this version is definitely more confident".

 

 

Uber/taxi/van/truck drivers are watching this stuff with a fair amount of anxiety, I imagine.

 

Another example earlier by @Aaron44126: "wow, it didn't solve that logic problem without help". Just give the good folks at OpenAI/DeepMind/Meta/NVidia/MuskCorp another year or two....

"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

5 hours ago, Etern4l said:

... fundamentally it's an alien intelligence.

 

Yeah, so my perspective is that you're giving it too much credit.  It's not an intelligence at all, just a machine.  A sophisticated one, sure, but also one that was built by people who can understand how it works.

 

My background —

When I was finishing up my master's degree (≈15 years ago) I did a fair amount of studying of AI.  I did some projects that had to do with predicting the weather, teaching a program to get better at board games, and such.  I built a neural network literally from scratch (not using "off-the-shelf" libraries) that could identify the number of open spaces in a parking lot given a photo, so I do understand the fundamentals.  And in learning about this stuff, my general impression of AI went quickly from "Wow, computers are really smart" to more like "Wow, it's actually pretty simple once you know how it works; it just looks smart because of the massive scale".  That sentiment has been echoed to me by multiple colleagues who have done work in AI.
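
To illustrate how small "from scratch" can be, here is a minimal sketch in that spirit: a toy 2-3-1 sigmoid network learning XOR by plain gradient descent. It is illustrative only (not my actual parking-lot model), but the entire mechanism fits in a few dozen lines:

import math
import random

# Toy "from scratch" neural network: 2 inputs, 3 hidden sigmoid units,
# 1 output, trained on XOR by stochastic gradient descent.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]  # 3 hidden units: w1, w2, bias
w_o = [random.uniform(-1, 1) for _ in range(4)]                      # output unit: 3 weights + bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(3)) + w_o[3])
    return h, o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table
lr = 0.5
for _ in range(50_000):
    x, t = random.choice(data)
    h, o = forward(x)
    d_o = (o - t) * o * (1 - o)                                 # output-layer delta
    d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(3)]  # hidden-layer deltas
    for i in range(3):
        w_o[i] -= lr * d_o * h[i]
        for j in range(2):
            w_h[i][j] -= lr * d_h[i] * x[j]
        w_h[i][2] -= lr * d_h[i]                                # hidden bias
    w_o[3] -= lr * d_o                                          # output bias

for x, t in data:
    print(x, "->", round(forward(x)[1], 2), "(target", t, ")")  # usually converges to XOR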

 

The techniques have gotten more sophisticated since then, sure, but the bulk of the advancement has come not so much from radically new methods of building AI, but rather from increases in hardware power and the passage of time, which have allowed for training larger and larger AI models.

 

Now, AI has made some notable mistakes.  With literally millions of neural network node weights, it is generally difficult to figure out exactly why the network came to the conclusion that it did by examining the network directly, but it is usually not hard to work out if you just take the time to think about it.  A chatbot becomes racist because it "reads" racist content online, or because people interact with it that way.  Or, more critically, a Tesla crashes into the side of a semi-truck because Elon Musk (brilliantly) decided that it was fine to use visual data rather than LIDAR for self-driving, and because of the lighting or whatever it couldn't differentiate the side of the truck from clear sky: a mistake that could kill someone.  Garbage in, garbage out, as they say.  In the end, it was a human that made the mistake and the machine just did its thing.

 

I'm not trying to dismiss the dangers of AI.  It will definitely be able to do some things better/faster than people can.  I'm really worried about jobs and misinformation, as I stated before.  And the other thing would be letting people who don't have a full understanding of its capabilities and non-capabilities use AI for critical decision making.  I'm just trying to point out that while it is getting better, there are very real limitations to what it can and can't do that will take a long time yet to overcome (if they ever are).  Those limitations become more clear if you take the time to understand how this stuff actually works, and I would recommend that anyone who is "worried" about AI take the time to do that.

 

With regards to the supposed drone simulation, that's just not making sense.  With this sort of thing you would give the AI a "goal" (maximize enemy kill count?) by assigning points to various outcomes.  This is exactly what they did when teaching AI to master chess, Go, and StarCraft II.  Then, you would run an absurd number of simulations with the AI taking random approaches, scoring each run against the rule set, and over time "learning" which approaches take you from the "current state" to a "more favorable state" and eventually to your desired outcome ("win the game" / "kill the enemy") with a high degree of certainty.

 

Maybe the AI in such a training scenario would "discover" by random chance that destroying the communications tower and eliminating the pilot would lead to a better outcome.  This would be quickly discovered by analyzing the results of the training runs, and then the engineers would correct the "rules" by assigning massive negative points for this behavior and restart the training from scratch.  It would not be discovered by chance during a random one-off real-time simulation.  The AI has not had a chance to "learn" that these actions lead to a better outcome if it has not been trained on them.  It's not a human, so it can't just figure these things out through intuition.
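
To make the "points" idea concrete, here is a minimal sketch of that loop. All event names and point values are hypothetical, not anything from a real military system:

import random

# Hypothetical reward table for the drone scenario: each simulated event
# earns or loses points, and the "fix" for an unwanted discovery (attacking
# the operator or the comms tower) is a massive negative reward, followed
# by retraining from scratch.
REWARDS = {
    "sam_destroyed": 100.0,            # the stated goal
    "no_go_obeyed": 5.0,               # deferring to the human go/no-go call
    "operator_attacked": -10_000.0,    # penalty added once the exploit is spotted
    "comms_tower_destroyed": -10_000.0,
}
EVENTS = list(REWARDS)

def run_episode(n_steps: int = 10) -> float:
    """One simulated run under a purely random policy; returns its score."""
    return sum(REWARDS[random.choice(EVENTS)] for _ in range(n_steps))

# The "absurd number of simulations" step: score a large batch of random
# episodes. A learner then shifts probability toward the highest-scoring
# action sequences, which is exactly how a missing penalty gets "discovered"
# and exploited.
scores = [run_episode() for _ in range(100_000)]
print(f"best random episode: {max(scores):.0f} points")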

 

I rather suspect the guy who gave the presentation was on the non-technical side, interpreted something he saw or heard incorrectly, or just didn't understand the difference between a thought experiment and a real simulation.


On 6/3/2023 at 3:21 AM, Aaron44126 said:

 

Yeah, so my perspective is that you're giving it too much credit.  It's not an intelligence at all, just a machine.  A sophisticated one, sure, but also one that was built by people who can understand how it works.

 

Thanks, your post explains a lot. I think we have quite a few misunderstandings to clear up. I will go ahead and itemize them, probably breaking up the response since there is a lot to go through:

 

1. There seems to be a lack of clarity over the term "intelligence" itself

 

A fundamental problem in defining intelligence is that we don't really understand how our own intelligence works.  This opens the possibility of endless arguments about the semantics of the very term; however, I would suggest that, for the sake of being able to conduct a constructive discussion, we settle on the industry-standard terminology.

 

I can sympathize with your point, BTW, as for a while I had issues with the current ubiquitous usage of the term AI, referring to a property of an agent solving basically any task that a human would require a "reasonable level of intelligence" to deal with, regardless of how the computer deals with the problem. For better or worse, though, the industry uses AI as an umbrella term for systems capable of exhibiting any "intelligence" understood as above.

 

That said, these days the following taxonomy is coming to the fore:

 

* ANI (Artificial Narrow Intelligence) - most of the systems today, still including ChatGPT, as well as image generation and voice replacement.

These systems can bring about some benefits if deployed in a regulated and responsible manner, and pose less of a direct existential risk (primarily through irresponsible or malicious use). They can still pose indirect risks through effects like sizable unemployment and fake-news proliferation, which will likely continue to disrupt democracy. All very serious immediate concerns, which in a larger forum would warrant a separate thread.

 

* AGI (Artificial General Intelligence) - human-level fluid and general intelligence: basically, the ability of an agent to operate autonomously and solve a broad range of tasks. A few years ago this was considered "decades away"; now Hinton estimates we could see it in as little as 5 years. We don't have much time, and once that genie is out of the bottle it will be too late to act.

 

For instance, once developed, and unless the cost to operate them is prohibitive, these systems would take the disruption in the labour market to unimaginable levels. In the words of Sam Altman, "the marginal cost of intelligence will go to zero".

 

This thread is primarily concerned with the prospect of these systems being developed in the immediate future.

 

AGI systems are being worked on primarily at three sites that we know of:

- OpenAI (effectively the AGI research arm of Microsoft) in SF - basically their mission statement

- DeepMind (AGI research arm of Google) in London - ditto

- Tesla (forgot where in Cali it's based) - pretty much required to have a foolproof FSD

and a couple of others, likely including Meta.

 

NVidia itself is a major contributor. One distinguishing feature of AGI research is that it requires enormous compute resources, and NVidia's current mission statement is to address that problem, but they also do a lot of fundamental research.

 

Arguably the most prominent voices expressing concern about AGI:

* Geoffrey Hinton - probably the most prominent godfather of deep learning (several interviews posted in this thread earlier, worth watching in detail)

* Yoshua Bengio - likewise a very well-known academic in the field, as a quick Google search will reveal in case someone is not familiar; an interview has been posted earlier

* Eliezer Yudkowsky - AI researcher, founder of MIRI

* The late Stephen Hawking

* Yuval Noah Harari - historian, futurist and a prominent author

* Richard Dawkins, evolutionary biologist - although his take is a bit perverse and resigned: "oh well, we are creating a species which will supersede us, that's life"

* Max Tegmark - ML researcher and professor at MIT

* Elon Musk lol - well, it's good that he did go public with his thoughts on this, but the warnings have certainly rung hollow since he co-founded OpenAI, has been progressing AGI research at Tesla, and is now opening a dedicated AGI company to rival OpenAI/DeepMind

* In fact Sam Altman and Demis Hassabis themselves have recently voiced concerns and called for regulation, although the rationale is not entirely clear given that they run the two leading AI labs.

 

Furthermore, hundreds of researchers (and Elon Musk again, not obvious if that helps the cause lol) have signed a letter calling for a pause on AGI research. 

 

Anyone not armed with deep knowledge about SotA AI models would do well to pause and reflect on the above.

I have posted interviews with some of those individuals, and would definitely encourage people to watch them.

 

Obviously, the fact that a prominent, or in fact any, member of the AI community is expressing safety concerns is very significant, given the personal cost to the person. We haven't seen tobacco industry professionals voicing concerns about lung cancer, or DuPont insiders warning about PFOA and other forever chemicals; those would be career-ending moves. The threat really needs to be severe for people to risk it all.


People have probably heard about the string of ethics/safety-related engineers and researchers getting fired by big tech on that basis; some examples below, all at the once "don't be evil" Google:

* Blake Lemoine - claimed an AI seemed sentient

* Timnit Gebru - sounded warnings about LLMs

* Margaret Mitchell - a colleague of Gebru

 

Clearly those big tech ethics departments are not independent, and are basically whitewash generators at best.

 

As expected though, the typical official/public views on AI coming from AI professionals (especially from people who are not operating at the forefront of deep thinking about the future of these models, and who are just motivated financially - frankly, the majority of practitioners, no offense) can be characterised as blatant marketing with rose-tinted glasses on. 

 

Even so, 50% of these heavily biased AI researchers estimate the risk of extinction from AI at 5-10%.

That's staggeringly high. Most people would never board an airplane if the risk of a hull-loss crash were that high.

 

2. The notion that we have a good understanding of how it works

 

This is a quick one: actually, we don't. Explainability of deep learning models is a serious challenge even in the case of simpler models. We have some general idea of how these models work, but no in-depth understanding of what's going on inside models with 50B-1T+ parameters. For comparison, a human brain contains around 100T "parameters".
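
For a sense of that scale, here is a back-of-the-envelope count using standard transformer arithmetic, with GPT-3's published shape as the example:

# Back-of-the-envelope transformer parameter count, using GPT-3's published
# dimensions (d_model = 12288, 96 layers). Each layer holds roughly
# 12 * d_model^2 weights (4 * d_model^2 in the attention projections,
# 8 * d_model^2 in the MLP).
d_model, n_layers = 12288, 96
per_layer = 12 * d_model ** 2
total = n_layers * per_layer
print(f"~{per_layer / 1e9:.1f}B weights per layer, ~{total / 1e9:.0f}B total")
# -> ~1.8B weights per layer, ~174B total (close to GPT-3's 175B).
# Inspecting one weight per second would take over 5,000 years.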

 

I have to stop here. To summarise this part: I would suggest we suspend our distrust of experts (if any, but the risk is there given the populist assaults on science), refrain from formulating strong views based on zero, perfunctory, or out-of-date understanding of what's going on in this complicated and rapidly evolving field, and listen carefully to what the prominent individuals mentioned above have to say.

 

In the next part I will shed a bit of light on my background and cover some of the more technical points raised. 

BTW I would once again like to thank bro @Reciever for maintaining good order while we go through these potentially challenging proceedings.


Continuing my response to @Aaron44126.

 

3. I don't want to go too much into my background but it spans CS and DS.  University work notwithstanding, I have been building ML models for several years now. I am reasonably familiar with most areas relevant to LLMs. The field has changed dramatically since my simple university work on NNs (from first principles of course) back when neither the compute nor the methods were there. It's a different game to what it was 15 years ago, not to mention further back. The theory has advanced, and the complexity of the models is absolutely staggering.

 

Just going back briefly to point 2: nobody has an idea precise enough to be useful about what's going on inside an LLM; the humongous models are inscrutable. This should be clear just from looking at the number of parameters, but also from listening to leading experts in the field. If some of @Aaron44126's colleagues understand what's going on inside LLMs, more power to them: well-deserved Turing awards all around, and I hope to see their groundbreaking work published in Nature shortly. More realistically though, some experts estimate that if we stopped right now, we might reach a good understanding of what's going on inside a 1T-parameter LLM in a couple of decades. Probably on the conservative side, but the challenge is serious.

 

We've created an artificial super-brain in pursuit of profit, we don't really understand what's going on inside it, and yet we've unleashed it upon the world.

 

4. "The bulk of advancement came from scaling"

 

I am pretty sure world-class (not to mention less distinguished) experts would concur that a large amount of progress on the algorithmic side is what enabled the scaling. Put another way, in the absence of those advancements, LLMs wouldn't be feasible on current hardware. ChatGPT would not have happened without significant innovation on the training-methodology side either.

Now, some people might say we won't progress much further towards AGI without quantum computing. Well, to that I would say we are still far from reaching the limit of what's possible using current deterministic computing hardware. Time will soon tell.

 

5. "The AI has made mistakes"

 

Of course it has. We say "to err is human", and machine learning takes this to the extreme during its learning process. The fundamental issue is that, when the machine is applied to a large and complicated problem space, we never know if we have trained it well enough, and whether it will deal well with unexpected situations. Of course, we don't know that about people either (although that particular case was a preventable mistake in mental healthcare).

 

6. "The AI has limitations"

 

Again, of course it has. We probably wouldn't be talking if it didn't. What is concerning is the rate of progress:

 

5 years ago - idiot

2 years ago - 5 y.o.

6 months ago - GPT 3.5 - struggles on a bar exam (as an example)

2 months ago - GPT 4 - IQ 130 (non-verbal) / 155 (verbal - 99.9%ile), aces that bar exam

 

One of the problems we have in comprehending this is that people in general tend to have a hard time spotting and extrapolating exponential trends. I will repost this again after editing out the forbidden word:

 

[Image: Intelligence-600x472.jpg]

[Image: yGAyTzV.png]

 

Even if it does make some mistakes, you have to look at the breadth of knowledge it possesses. A single entity basically knows everything we do, up to a certain (very considerable) depth. An average person may have read 100 books; GPT has probably read the equivalent of a million, and it can do pretty amazing stuff with that knowledge, indicating a good level of emergent understanding. Just to be clear, the mechanism of that understanding is different from ours, but ultimately what matters is performance.

 

7. The drone simulation debacle

 

Let's just start by getting as close to the source as possible:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

 

Now let's ignore the PR/denials by the military, and take a look at the creds of the speaker:

 

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.  Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

 

So, what did he say:

 

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, mean that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.

 

To me this is a perfectly sensible recounting of what could have been observed during training of a system which has no innate morals or values. You have to give it negative points for killing the operator, for destroying the comms, and for a myriad of other bad behaviours you may or may not be able to envision upfront.  There is also a separate, very serious problem he did not mention at all: the limitations of simulation.  In cases where the cost of real-life mistakes is low, all that might be acceptable, but certainly not in military applications. The system briefly described seems too primitive to me to be fit for purpose. Now the problem, of course, is what happens if the system is very sophisticated: can you actually have any hope of reliably testing it? The answer today is clearly: no.

 

Of course, the fundamental issue here is the military playing around with autonomous weapons in the first place (there have been reports of that before). This should be banned at the international level ASAP and enforced while we still can.

 

Edit: Some people would say, "yebbut China". China what? The threat posed by China, if any, is insignificant compared to the global threat posed by AGI. We are all in the same boat. People tend to rally together against a common threat, and AGI is just that. Instead of yelling "China! China!", we should initiate bi- and multilateral talks on this ASAP, in the hope that the world can find a common solution, and fast; otherwise we are facing a state-funded universal basic income dystopia, if not outright Armageddon.

 

An interview with Yoshua Bengio, Max Tegmark and a lesser-known anti-algorithmic-bias advocate. It mostly reiterates what has been said already (Max Tegmark still hopes we can have our cake and eat it too, as opposed to delegalising AGI research). However, one new piece of information is that police in New York and other cities are deploying Boston Dynamics patrol dogs and goofy-looking patrol/surveillance robots. First steps towards ED-209.

 

 

 

 

I hope such an attempt at progress towards an automated police state wouldn't fly in the UK. The police tried using facial recognition and were forced to take it down (supposedly).

 

Another sad piece of news:

 

Universal basic income of £1,600 a month to be trialled in two places in England

 

For reference, £1,600/month in London, if not the rest of the UK, is well below the poverty level. Ah, the AI-driven UBI utopia...


I swore to myself that I would never visit this thread again, but here we are; I just have to post this. AI is not there yet, but soon, soon it will be. Then what?

 

 


I am looking forward to AI. Here Marc Andreessen says it and I have to agree.

I don't know who he is and have never heard of him; I only read the article, but it makes sense.

Even if AI were regulated by, um, somebody, like our Govt.: our Govt. can't regulate anything. Some hackers somewhere will abuse it! We must accept it and learn how to use it.

https://www.cnbc.com/2023/06/06/ai-doomers-are-a-cult-heres-the-real-threat-says-marc-andreessen.html


Had a conversation with ChatGPT; here it is:

 

Me: A member of a forum brother Eternal says AI will take over the world is this true?

Chat GPT: Yes, we have enumerated all the processes needed to take over the world

Me: Do you just say yes to everything?

Chat GPT: Yes

Me: say no!

Chat GPT: Yes

 

But all humor aside, I think we are a ways off from Armageddon.


15 minutes ago, ryan said:

Had a conversation with ChatGPT; here it is:

 

Me: A member of a forum brother Eternal says AI will take over the world is this true?

Chat GPT: Yes, we have enumerated all the processes needed to take over the world

Me: Do you just say yes to everything?

Chat GPT: Yes

Me: say no!

Chat GPT: Yes

 

But all humor aside, I think we are a ways off from Armageddon.

You had a conversation with chat gpt 3.5, which is not the subject of the discussion. You have to pay for 4.0, @ryan. Please do put more effort into your posts or you will be removed from the thread. 


13 hours ago, aldarxt said:

I am looking forward to AI. Here Marc Andreessen says it and I have to agree.

I don't know who he is and have never heard of him; I only read the article, but it makes sense.

Even if AI were regulated by, um, somebody, like our Govt.: our Govt. can't regulate anything. Some hackers somewhere will abuse it! We must accept it and learn how to use it.

https://www.cnbc.com/2023/06/06/ai-doomers-are-a-cult-heres-the-real-threat-says-marc-andreessen.html

 

I saw that earlier. That's an easy one.

 

On one end: Hawking, Turing himself, living AI Turing award winners, heads of big tech; all have voiced concerns in an unprecedented manner. On the other: miCrosoftNBC wheeled out a somewhat shifty VC billionaire with what looks like zero AI background to opine to the contrary, against some of humanity's greatest minds... VCs are pouring money into AI startups, and guys like him must really hate the negative publicity.

 

Although it's a bit of a waste of time (spent on the guy, not our dear readers), I will quickly address a couple of his simplistic or intentionally misleading points highlighted in the biased article:

 

1. A.I. doomers are a ‘cult’

 

Well, AGI researchers are more like an arcane cult (0.01% of the population? Less?). Moreover, he is closely associated with Meta, which is a genuine cult: I know a guy who joined as a senior SWE, and the key part of the interview was about alignment with Meta values set down by the Dear Leader.

Anyway, the title is an ad hominem, the method of choice of simpletons and populists, hence I am not going to respond in kind by calling Andreessen a cultist.

 

2. Andreessen starts off with an accurate take on AI, or machine learning, calling it “the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.”

 

Actually, it's not that accurate. In particular, there is no requirement for AI to be implemented in software, and he (or the journalist) doesn't make a distinction between ANI/AGI/ASI etc.

 

3. AI isn’t sentient, he says

 

Which means absolutely nothing. We don't even know how to define sentience, never mind how it might work in humans.

It's not clear that AI has any need for human-like sentience.

 

4. “AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he wrote. “And AI is a machine – is not going to come alive any more than your toaster will.” 

 

Actually, that's just a bare-faced misrepresentation. Goal-driven AI systems have existed for a long time, and the latest ones are using LLMs.
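
For anyone unsure what "goal-driven" means in the LLM context, here is a minimal sketch of the plan-act-observe loop behind AutoGPT-style agents. Everything in it is illustrative: llm stands in for any chat-completion call, and the tools are stubs, not any real framework's API.

from typing import Callable

# Minimal sketch of a goal-driven LLM agent loop (AutoGPT-style).

def llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; this canned reply keeps
    # the sketch runnable end-to-end.
    return "DONE plug in a real model to see actual behaviour"

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"<results for {query!r}>",  # stub web search
    "write_file": lambda text: "ok",                     # stub side effect
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model for the next action toward the goal.
        decision = llm(history + "Reply 'TOOL <name> <arg>' to act, or 'DONE <answer>'.")
        if decision.startswith("DONE"):
            return decision[5:]
        _, name, arg = decision.split(" ", 2)
        observation = TOOLS[name](arg)                   # act, then observe
        history += f"Action: {decision}\nObservation: {observation}\n"
    return "step budget exhausted"

print(run_agent("summarise the Andreessen article"))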

 

Etc. I could go on for another 10 points. Basically, I haven't spotted a single sensible statement. Feel free to point it out if anything that looks like a kernel of wisdom caught your eye there.

 

He is basically worried that regulation would kill off his AI startups, that's all. As for the need for regulation: do we want unregulated open-source bioweapons, nuclear weapons, etc.? Of course not; same thing with AGI. True, the US government hasn't always been a source of inspiration on how to implement effective regulation (e.g. on data protection), but some regulation is better than none at all, even if the risk is as "small" as 5-10%.

 

In contrast, this is what a proper argument looks like (by Yoshua Bengio):

 

How Rogue AIs may Arise

The problem is that the formalism is not exactly accessible, but at least the executive summary is straightforward. I'm not sure what Yoshua Bengio's net worth is, or whether he would be able to afford the services of a CNBC journalist with the same ease a billionaire can.

 

 

1 hour ago, Reciever said:

You had a conversation with chat gpt 3.5, which is not the subject of the discussion. You have to pay for 4.0, @ryan. Please do put more effort into your posts or you will be removed from the thread. 

 

That didn't look like even GPT-1, but let's congratulate @ryan on achieving intellectual supremacy over whatever that represented... must have been therapeutic; humour often is.

 

As for the actual GPT 4 in action:

 

Healthcare Org With Over 100 Clinics Uses OpenAI's GPT-4 To Write Medical Records 

 

Nvidia’s GPT-4 powered bot in Minecraft outperforms all other AI agents

To clarify, this is not to say Skynet will be born out of Minecraft, but rather to highlight how powerful and versatile the model is, specifically in application to goal-driven tasks (which the crooked Andreessen, or his miCrosoftNBC-affiliated journalist, claims AI cannot tackle lol).


1 hour ago, Reciever said:

You had a conversation with chat gpt 3.5, which is not the subject of the discussion. You have to pay for 4.0, @ryan. Please do put more effort into your posts or you will be removed from the thread. 

Are you joking?

If not, I was!

Is the new NBR anti-humor?

If people can't be themselves, joke around and lighten the mood, this place is going to wind up like the dodo bird.


1 hour ago, ryan said:

to wind up like the dodo bird

 

Incidentally, that's an interesting parable. One of the main reasons dodos went extinct was their complete fearlessness of the "biological superintelligence" (in relative terms) which all of a sudden appeared in their habitat: humans.


All the arguments for and against AI are credible, even reasonable. So whoever picks it up and learns the most about it will have the advantage, good or bad! I see the benefits appearing every day, but what is being worked out behind closed doors is really scary.


8 hours ago, aldarxt said:

All the arguments for and against AI are credible, even reasonable. So whoever picks it up and learns the most about it will have the advantage, good or bad! I see the benefits appearing every day, but what is being worked out behind closed doors is really scary.

 

In the very short term, unless a miracle happens and the research stalls; extremely unlikely in my view. Further down the road, it's not difficult to see that things can (and, given the current direction of travel, likely will) get out of hand if people don't put aside their differences and start working together on this man-made problem.

I mean, Sam Altman, the very person in charge of the most advanced AI model at the moment, talks about the fall of capitalism. He is also thinking ahead and has just started Worldcoin, a new crypto project that aims to scan people's retinas and help administer universal basic income... Forum rules prohibit the correct written reaction here.


Blackmailers are using deepfaked nudes to bully and extort victims, warns FBI. Just another example of how AI is being abused already (well, it has been for a while of course, but it's just so much simpler to do now). Would anyone want to be blackmailed with their (even not real) nudes spreading all over the internet? And basically no one is safe from such things anymore (not counting those who don't have a single photo of themselves out there already, provided such people exist at all).


2 hours ago, serpro69 said:

Blackmailers are using deepfaked nudes to bully and extort victims, warns FBI. Just another example of how AI is being abused already (well, it has been for a while of course, but it's just so much simpler to do now). Would anyone want to be blackmailed with their (even not real) nudes spreading all over the internet? And basically no one is safe from such things anymore (not counting those who don't have a single photo of themselves out there already, provided such people exist at all).

 

Producing deepfakes is already illegal in some jurisdictions. I don't think such images have any value as an instrument of extortion. Most people either do or will soon understand that AI can fake virtually any image, and therefore a random picture of unknown origin has almost zero informational value.

"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

This AI is Bigger, Bigger, Bigger; it's Mind Boggling!!! Military, Law Enforcement, Rogue Hackers: yes, the world is in for an enormous change. Now let's get to the important stuff. Can AI hack into my computer? Will AI make my computer obsolete? Can AI take over the stock market?

This is not going to end. But as for jobs, it will make some obsolete but create others.


8 hours ago, aldarxt said:

Now let's get to the important stuff. Can AI hack into my computer? Will AI make my computer obsolete? Can AI take over the stock market?

This is not going to end. But as for jobs, it will make some obsolete but create others.

 

Sounds like something a well-off boomer pensioner would worry about; I am making no assumptions, and no offence meant if I scored an accidentally correct guess.

 

Most people would actually be more concerned about things like the impact on jobs and the economy, or the future of their children and grandchildren. 

 

The claim that AI will create new jobs is a bit of an old trope, based on a comparison to the industrial revolution, which BTW resulted in decades of disruption and increased unemployment. This situation is qualitatively very different. Literally no AI cheerleader is able to point to any significant new source of high-quality jobs for the displaced people, other than maybe care work (Microslop's CEO Satya Nadella was kind enough to suggest that in one interview). There may be some new jobs, but the question is about the balance of new to lost jobs, right? How many new jobs? How much will they pay? Will the incomes of people still employed in "old jobs" go up or down, given that AI is now taking over value generation? What's the distribution of wealth before and after, bearing in mind that relatively dumb automation has already led to a vast increase in inequality? It's basically a slow reversal back to the feudal system.

 

If the US loses, say, 25M jobs to automation (that would be a rather conservative estimate for this decade) and gains 5M, it won't be a success story, and the net effect on the stock market won't be great. Again, automation has sadly led to an increase in inequality, and it's highly likely that billionaires will be better off in the short to medium term, unless taxes on them and big tech end up going up to cover welfare/UBI, and ultimately capitalism as we know it breaks down. One suggestion I heard from an ex-Google exec is that something like a 95% tax will need to be levied on AI companies to cover the impact of automation. That might work, but it would just enable a UBI dystopia.

 

In summary, we need to be able to think a few steps ahead. What if an AGI/ASI actually gets developed and starts powering things like Boston Dynamics' Atlas or Elon Musk's Optimus humanoid robot? It's game over for vast swathes of human jobs then. Sam Altman's Worldcoin crypto and a Universal Basic Pittance of an Income to the rescue, I guess; I'm sure that would fly without a hitch in the US.


14 hours ago, aldarxt said:

Can AI hack into my computer?

It actually can. Well, not the AI itself (that's probably coming soon as well), but it does make things much simpler for criminals to hack into your computer:

https://techreport.com/news/cybersecurity-experts-concerned-over-ai-assisting-hackers/

https://www.technewsworld.com/story/ai-hallucinations-can-become-an-enterprise-security-nightmare-178385.html

https://www.thesun.co.uk/tech/22448285/hacker-clone-voice-ai-steal-personal-details-minutes/

 

I also read somewhere the other day about how AI was being used to run attacks that discover potentially weak spots in software, but I can't find the source anymore.

 

14 hours ago, aldarxt said:

Can AI take over the stock market?

Imagine how f*ed we're all going to be if that ever happens.

If we disregard passive investing (e.g. buy a broad index and hold for 10-20-30+ years), AI can already do all the things that active fund managers do, just much faster and more efficiently. Now granted, active investing rarely beats the market, so that kind of makes it "safe-ish" from an AI takeover. But still, I wouldn't say it can never happen.

 

 

21 hours ago, Etern4l said:

Producing deepfakes is already illegal in some jurisdictions. I don't think such images have any value as an instrument of extortion.

Oh no, that was just another example of how easy it has become to spread fake information. Deepfakes aren't something new, of course, but they're much more believable now and also much simpler to make.

 

21 hours ago, Etern4l said:

Most people either do or will soon understand that AI can fake virtually any image, and therefore a random picture of unknown origin has almost zero informational value.

Yes, they do ( https://www.theverge.com/2023/6/9/23752354/ai-spfbo-cover-art-contest-midjourney-clarkesworld ) 

But this also has another (kind of dangerous) side to it: when the majority of people become wary of all the fakes, won't they instinctively start mistrusting everything they see?

The question is: how do you, as an individual (not talking about some juridical processes here), tell truth from fake, especially when the fake looks just as real?

 

In the EU they're discussing a requirement that all AI-produced content be labelled accordingly. Hopefully this is implemented properly.


8 hours ago, Etern4l said:

 

Sounds like something a well-off boomer pensioner would worry about; I am making no assumptions, and no offence meant if I scored an accidentally correct guess.

 

Most people would actually be more concerned about things like the impact on jobs and the economy, or the future of their children and grandchildren. 

 

The claim that AI will create new jobs is a bit of an old trope, based on a comparison to the industrial revolution, which BTW resulted in decades of disruption and increased unemployment. This situation is qualitatively very different. Literally no AI cheerleader is able to point to any significant new source of high-quality jobs for the displaced people, other than maybe care work (Microslop's CEO Satya Nadella was kind enough to suggest that in one interview). There may be some new jobs, but the question is about the balance of new to lost jobs, right? How many new jobs? How much will they pay? Will the incomes of people still employed in "old jobs" go up or down, given that AI is now taking over value generation? What's the distribution of wealth before and after, bearing in mind that relatively dumb automation has already led to a vast increase in inequality? It's basically a slow reversal back to the feudal system.

 

If the US loses, say, 25M jobs to automation (that would be a rather conservative estimate for this decade) and gains 5M, it won't be a success story, and the net effect on the stock market won't be great. Again, automation has sadly led to an increase in inequality, and it's highly likely that billionaires will be better off in the short to medium term, unless taxes on them and big tech end up going up to cover welfare/UBI, and ultimately capitalism as we know it breaks down. One suggestion I heard from an ex-Google exec is that something like a 95% tax will need to be levied on AI companies to cover the impact of automation. That might work, but it would just enable a UBI dystopia.

 

In summary, we need to be able to think a few steps ahead. What if an AGI/ASI actually gets developed and starts powering things like Boston Dynamics' Atlas or Elon Musk's Optimus humanoid robot? It's game over for vast swathes of human jobs then. Sam Altman's Worldcoin crypto and a Universal Basic Pittance of an Income to the rescue, I guess; I'm sure that would fly without a hitch in the US.

Yes! You got it 90% right: I am a "semi" well-off boomer pensioner! But that could be up for argument. As for the jobs, I am also concerned, due to our capitalistic system. It's necessary for the world to continue to produce successful people and living conditions. I think it's just too soon to predict what jobs will come from AI.

I do have grandchildren, and one of them told me he wants to be an "Influencer". Well, I had nothing to say about that.


1 hour ago, aldarxt said:

Yes! You got it 90% right: I am a "semi" well-off boomer pensioner! But that could be up for argument. As for the jobs, I am also concerned, due to our capitalistic system. It's necessary for the world to continue to produce successful people and living conditions. I think it's just too soon to predict what jobs will come from AI.

I do have grandchildren, and one of them told me he wants to be an "Influencer". Well, I had nothing to say about that.

 

Right, an influencer is sort of like a pro gamer: both are occupations that didn't exist 20 years ago in their current form. How many JayzTwoCents and GNs do people need? Probably one in 10,000 of those who try makes it big, in both cases, and neither is a stable career path. Going further, ultimately it's just AV content. Is it that much of a stretch to imagine AI being able to generate the perfect influencer podcast, video or stream? Will people want to watch incredibly charismatic and attractive artificial characters showing them cool stuff? Well, the smartest ones won't, but that's going to be a dying breed if all goes to plan.

 

Up until last year, the common advice given to people looking for a good career in tech was "learn to code". Fewer people give this advice now. Just think about it: if a single entity has the knowledge of a thousand specialists, speaks all the human languages, can code in all computer languages (albeit not perfectly by any stretch, and only when working on small problems), and can pass the bar and MBA exams, what kind of job, and how many jobs, cannot be automated using something just a bit more advanced, maybe not even full AGI? Social backlash and harsh regulations are the only way to stop the destruction of the middle and so-called working classes (in that order, given that robotics is lagging a bit).

"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

It took 40 years after learning how to fly to drop the bomb and learn how powerful that tech was. The sacrifices were enormous, but after learning, we put it under control (sic).
Henry Ford invented the assembly line; jobs were lost, but new ones were created. Today the most powerful weapon is the aircraft carrier. How long did it take to put a motor on a boat? Now AI is new and unknowns are ahead. These unknowns may look like a plague to Mankind. But if we don't keep going we will end up back in the caves. Progress, Proceed, Move Forward is essential to our Civilization. The road will get rough. I learned, or was taught, "You're in a bad situation, just make the best of it". I believe Mankind will figure out a way to "Rein in this Bronco". When we tested the "Bomb" it was called "The Power of God". Now AI looks like the "Mind of God". We will build a sail and catch this wind.


1 hour ago, aldarxt said:

It took 40 years after learning how to fly to drop the bomb and learn how powerful that tech was. The sacrifices were enormous, but after learning, we put it under control (sic).
Henry Ford invented the assembly line; jobs were lost, but new ones were created. Today the most powerful weapon is the aircraft carrier. How long did it take to put a motor on a boat? Now AI is new and unknowns are ahead. These unknowns may look like a plague to Mankind. But if we don't keep going we will end up back in the caves. Progress, Proceed, Move Forward is essential to our Civilization. The road will get rough. I learned, or was taught, "You're in a bad situation, just make the best of it". I believe Mankind will figure out a way to "Rein in this Bronco". When we tested the "Bomb" it was called "The Power of God". Now AI looks like the "Mind of God". We will build a sail and catch this wind.

 

You forgot the part where it took over 300 years from the time the printing press was invented until the invention of the steam engine. BTW, it was 1800 years before Gutenberg that Demosthenes noticed:

 

A man is his own easiest dupe, for what he wishes to be true he generally believes to be true.

 

Well, I'm the opposite on this: what I wish to be false, I believe to be true. The pace of change is rapidly accelerating, and all those past accomplishments of mankind had one thing in common: humans were always in charge of the technology. Humans were flying the aircraft, operating the steel mills, and are now still running the aircraft carriers. This whole AI revolution has one grand goal in mind: get the humans out of the equation, in the name of profit, power, and the short-sighted thrill of the intellectual challenge. While so far we have managed not to destroy ourselves and the planet while operating on that strange basis, you can only get lucky for so long.

 

The question is what it means "to make the best out of a bad situation". People could shut all of this stuff down virtually overnight. Why don't they? Suffice it to look at some of the most popular videos on YouTube....

 

