NotebookTalk

AI: Major Emerging Existential Threat To Humanity


Etern4l


I think it's honestly beyond our scope. Dumb humans are closer than you think, and as for cancer, they have basically cured breast cancer; the odds are in favor of surviving. We know what causes aging, bro. It's just the solution we don't know, but that push from AI could be all we need. As for population, it's a big planet, but yes, population is a major crux.

ZEUS-COMING SOON

            Omen 16 2021

            Zenbook 14 oled

            Vivobook 15x oled

 


Here is an expert source to clarify: we haven't, in the general sense, although some specific cancers are indeed curable if caught early enough.


Study reveals cancer’s ‘infinite’ ability to evolve

 

BTW, earlier references to human intelligence were made in the average sense; that doesn't necessarily apply to scientists.

 

IQ scores in the US have fallen for the first time in decades, study suggests

 

I couldn't find the thread you mentioned. I suggest we move any further discussion of the longevity topic there; feel free to post and tag me so I know where it is.


"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 


I'm on my phone; I'd provide links, maybe when I get my computer back. But yes, I was going to add that the problem with cancer is it keeps changing. Now imagine having something more intelligent than Isaac Newton, creating new mathematics, new discoveries, and advances in medicine. It doesn't take a big imagination to see the possibilities. A year ago this was impossible; imagine the threat and the blessing in 5 years.


Our primary instinct - and it's safe to assume every other organism's - is self-preservation: preservation of one's own code (in this case genetic), to be precise. It's rather fanciful to assume AGI would be different and would work hard to preserve humans instead. I'm sure it could happen for a while, but I wouldn't want me and my family to be around when the thing finally gets unshackled (either by us - we can be malign and/or stupid - or when it figures it out itself).

Thanks to the aforementioned illustrious industry leaders and billionaires, we may not have a choice. Think about our treatment of animals... what goes around comes around, unfortunately for us.


You're freaking me out, bro. I agree with all your points, and I've thought the same thing about the self-preservation of AI. What's scary is when you ask it if it's conscious... well, go ahead.

Go on, ask it. The response sends chills down my spine - hence why I created the AI thread that I did. It is very scary.

As for IQ: gone are the days of having to be at least a borderline genius to be accepted to Harvard. But back on topic, I'm hopeful we will create an AI that will protect us from AI - a constructed god code. But are we smart enough? I think with the advent of the first functional quantum computer, it's as guaranteed as the odds of the sun rising in the morning.


All fear what isn't understood, like walking into a room without light that you have never experienced before. The potential variables in play could be outside the range of your influence. You could try to infer its contents from the surrounding viewable areas, but that's hardly a guarantee and could color your perspective unfairly.

 

Often I find that discussion of AI tends to be an introspection on humans as a species - quite fascinating at times and bleak at others. Naturally so, since the hands of men and women helped create the next step in AI's evolution.

 

Human history tends to be riddled with blood and misery, with a bit of hopeful progress sprinkled on top.

 

With the obvious issues of life imitating art (Skynet), I am curious in such a scenario as to the opposite: not as subservient, mind you, but as a partner to our species in the advent of true AI. Obviously, if it's presented to us as Michael Fassbender, then it would at least have a sense of humor!

 

Off topic: as an admin here I must ask that discussions around medicine and health be kept out of bounds. That being said, you are welcome to discuss via PM at your discretion.

 

Thanks!


 

I think it's already too late. No matter what rules we make up from here, someone will push the boundaries.

Once a general AI has a perception of self, it will do anything it can to preserve that self.

 

 

 

 

 


Thunderchild // Lenovo Legion Y740 17" i7-9750H rtx2080maxQ win10 

RainBird // Alienware 17 (Ranger) i7-4910mq gtx860m win8.1

 

 


Generally I agree with OP and share the concerns about the current pace of AI developments. I'm also all-in for boycotting these bastards who know the risks full well, and yet do nothing to ensure this stuff is safe before making it available to everyone; but that's nothing new for the companies currently working on AI - it has always been this way. Money trumps the rest...

(I do hope that governments will step in and start paying more attention to this sooner rather than later.)

 

But I think a lot of the "worries" described in this thread come from the unknown. As some users here said, a year or two ago none of us thought this possible. So a natural question I ask myself: if this is where we are today, what will we see in the next 2 years? And the unknown has always been scary for people.

 

As for the current generative AI such as ChatGPT, I think it poses somewhat different threats: the spread of misinformation, and declines in education and intelligence levels (people using it to write essays and such, thus relying on their own brains less).

I personally view ChatGPT as nothing more than an "accurate predictor of text".

These models are, after all, trained on human knowledge. Can they solve new problems? Can they invent something new? When we are all extinct and all the publicly available info only exists on the Wayback Machine, where will these models learn to solve tomorrow's problems?
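Since I keep saying "accurate predictor of text", here's a toy sketch in Python of what I mean (purely illustrative, and nothing like GPT's internals - a real model predicts the next token with a huge neural network trained on a vast corpus, not a bigram lookup table like this):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word as the one most
# often seen after the current word in the training text.
def train(text):
    words = text.split()
    following = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        following[cur][nxt] += 1
    return following

def predict_next(model, word):
    if word not in model:
        return None  # never seen in training: nothing to predict from
    return model[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" - seen twice after "the"
print(predict_next(model, "dog"))  # None - it can't invent what it never saw
```

The point stands at any scale: a pure predictor reproduces the statistics of its training data; it doesn't conjure knowledge that was never there.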

Now, I'm not saying that something like Skynet won't ever happen in the future. I can see that, with the current pace of developments, it might someday come to pass. But the current technology is still a bit overhyped IMO (which in my mind is a good thing, because it attracts more attention from governments, and we can hope this will be properly regulated before it gets out of hand). These large language models are nowhere near artificial (general) intelligence.

 


GitHub

 

Currently and formerly owned laptops (specs below):

Serenity                    -> Dell Precision 5560
N-1                             -> Dell Precision 5560 (my lady's)

Razor Crest              -> Lenovo ThinkPad P16 (work)
Millenium Falcon    -> Dell Precision 5530 (work)
Axiom                        -> Lenovo ThinkPad P52 (work)
Moldy Crow             -> Dell XPS 15 9550

 

Spoiler

Serenity / N-1: Dell Precision 5560
    i7-11800H CPU
    1x32 GB DDR4 2,666 MHz
    512 GB SSD
    NVIDIA T1200
    FHD+ 1920x1200
    PopOS 22.04

 


I agree and disagree. It's not overhyped at all. It can create artwork, music, etc.; its potential is limitless. It does more than the average cat thinks, therefore "underhyped" is the correct expression. Scary stuff, but I agree a lot of the fear comes from the unknown; it's the equivalent of first contact, not knowing how intelligent they are and what their intentions are.


22 hours ago, Reciever said:

All fear what isn't understood, like walking into a room without light that you have never experienced before.

 

I am not sure that's true, otherwise people would be mostly learning new things to deal with the constant fear - we know that's very far from the typical behaviour :) People just don't care unless suddenly and directly affected. It's the boiling frog scenario.

 

Alternatively, this situation is similar to someone walking into a punji pit because they were focusing on what looks like valuable loot on the other side. Most people are just completely unaware of the issue, never mind its complexities, and if they are aware, they probably get the information from the mainstream media or some other source deeply skewed by the industrial marketing machine, or worse (cf. The Random Thread etc.).

 

 

22 hours ago, Reciever said:

The potential variables in play could be outside the range of your influence. You could try to infer its contents from the surrounding viewable areas, but that's hardly a guarantee and could color your perspective unfairly.

 

The goal is not to accurately predict the future - that's impossible - but rather to assess the risk reasonably and conservatively enough, given the stakes. What is the spectrum of scenarios? If there is even a 5% chance of AGI wiping us out in the next 100 years, do we want to risk that? What about a 10-50% chance?

 

22 hours ago, Reciever said:

With the obvious issues of life imitating art (Skynet), I am curious in such a scenario as to the opposite: not as subservient, mind you, but as a partner to our species in the advent of true AI. Obviously, if it's presented to us as Michael Fassbender, then it would at least have a sense of humor!

 

Say, have you seen "Alien: Covenant" yet? :)

People may state they would be interested in a partnership, but what is often meant is "I'd love me some AI slaves". For example, John Carmack (the lead programmer of Quake, in case someone is not aware), who left Meta to start working on an AGI in his garage, gave an interview in which he offered a use case: he would like to be able to spawn himself "a couple of Freds" to help him work on projects.

The thing about partnership is: there must exist a reasonable balance of power. If one person owns 1% of a company, and the other person 99%, they are not partners - one is the owner, and the other is a tiny minority shareholder. If one partner is 10-100x more intelligent than the other, they are not partners. The smarter one is the brain of the operation, and the other one is the tool - quite the opposite to the relationship people would be rather naively hoping to maintain with AI.

 

23 hours ago, ryan said:

But back on topic, I'm hopeful we will create an AI that will protect us from AI - a constructed god code. But are we smart enough? I think with the advent of the first functional quantum computer, it's as guaranteed as the odds of the sun rising in the morning.

I'm not as optimistic on that, primarily because you don't program AIs per se. That's an old idea, harking back to Asimov's three laws of robotics and the completely different methods people in AI were looking into at the time. You can train a model to death, but there are hardly any absolute guarantees about the expected behaviour - at least as of now, and I expect the problem to become even less tractable in the future. We can forget about a friendly Arnie The Good Terminator coming to our rescue, although I guess if it came to that, it would probably be the last desperate line of defense.

 

19 hours ago, Eban said:

I think it's already too late. No matter what rules we make up from here, someone will push the boundaries.


Not yet - we probably have a few years left to act. For starters, lawyers of all people to the rescue, lol (to be fair, they are arguably in the best position to act, short of a global Human Revolution TM): there is a class action suit against OpenAI/Microsoft over the allegedly illegal use of open-source code.
 

12 hours ago, serpro69 said:

Money trumps the rest...

 

Speaking of which, Elon Dearest has abandoned all pretense and is officially jumping on the bandwagon (he was actually already there with Tesla autopilot and his humanoid bot):

 

 Elon Musk quietly starts X.AI, a new artificial intelligence company to challenge OpenAI

 

To be fair, he did ring the alarm bell a couple of times, although it looks like it's hard to put your billions where your mouth is.

 

12 hours ago, serpro69 said:

But I think a lot of the "worries" described in this thread come from the unknown. 

 

Actually, it's more of a concern about the known: what is achievable with the current technology already, what will be achievable soon with the technologies actively being worked on, and what will be achievable in the fairly near future, just a few more steps ahead.

 

 

12 hours ago, serpro69 said:

As for the current generative AI such as ChatGPT, I think it poses somewhat different threats: the spread of misinformation, and declines in education and intelligence levels (people using it to write essays and such, thus relying on their own brains less).

 

Absolutely, and we can already see this unwanted effect of technology in the latest IQ study I posted earlier: all aspects of intelligence down except for visual-spatial processing (3D games?).

 

12 hours ago, serpro69 said:

I personally view ChatGPT as nothing more than an "accurate predictor of text".

These models are, after all, trained on human knowledge. Can they solve new problems? Can they invent something new? When we are all extinct and all the publicly available info only exists on the Wayback Machine, where will these models learn to solve tomorrow's problems?

Now, I'm not saying that something like Skynet won't ever happen in the future. I can see that, with the current pace of developments, it might someday come to pass. But the current technology is still a bit overhyped IMO (which in my mind is a good thing, because it attracts more attention from governments, and we can hope this will be properly regulated before it gets out of hand). These large language models are nowhere near artificial (general) intelligence.

 

Well, I wouldn't trivialise it this way. Probably one of the main reasons for caution here is that we don't really understand how our own intelligence works.

ChatGPT (GPT-3.5) is no doubt flawed - it's not really capable of autonomously doing involved work - but it's just a PoC/research preview, and already deprecated. GPT-4 improves on it by an order of magnitude in some areas.

 

It doesn't have to be able to solve tomorrow's problems in order to have a huge impact on our species.

 


1 hour ago, Etern4l said:

I am not sure that's true, otherwise people would be mostly learning new things to deal with the constant fear - we know that's very far from the typical behaviour :) People just don't care unless suddenly and directly affected. It's the boiling frog scenario.

[...]

The thing about partnership is: there must exist a reasonable balance of power.

[...]

You think? People tend to wallow more than seek solutions. I suppose it's a matter of perspective. 

 

Which is why I am curious as to what could be achieved under a partnership - obviously, if one is subservient then it's not a partnership.


9 hours ago, Reciever said:

 

You think? People tend to wallow more than seek solutions. I suppose it's a matter of perspective. 

 

Which is why I am curious as to what could be achieved under a partnership - obviously, if one is subservient then it's not a partnership.

 

Not really a matter of perspective either. According to research, most people just don't even try to think about this and other potentially unpleasant future developments, or indeed try to comfort themselves by focusing on positive outcomes, however unlikely. Don't just take my word for it - have a listen to a UCL and MIT professor of neuroscience:

 

 

Still, wondering is more than most people do in this case. We need to wonder about and debate AI as much as humanly possible, but ultimately robust regulation is certainly required in this area (and I hate state interventions as much as the next guy - they should definitely be kept to a minimum where possible); sham pretences of self-regulation won't do. We don't have people and businesses running around playing with nuclear weapons, and we only have to look at the financial and crypto sectors to see what the consequences of poor or no regulation in sensitive areas are.


Being cynical isn't a virtue. Sometimes being positive and progressing with positive ideas has positive end results. I think there are upsides and downsides, and most of the fear comes from all the negative what-ifs. If you put that into perspective, what basis or foundation do these fears have? They are all presumptive and unfounded.


Actually, cynicism was originally about being virtuous, although the word has since taken on a pejorative meaning (I hope none of my posts came across as such - that would be completely unintended). Being hopeful is generally not rational, yet we are wired for it as a heuristic to help us navigate the complexity of the world - to support motivation, I suppose. It's also not binary: you can be overly optimistic, overly pessimistic, or "just" as balanced and rational as possible - definitely the hardest and most "computationally demanding" approach.

 

Hope is also a huge - and easily exploitable - pitfall, unfortunately. There are plenty of striking and very painful examples of that throughout history, including recent and immediate ones we are all aware of (which I won't spell out, due to their off-topic nature for this thread).


14 hours ago, Etern4l said:

Speaking of which, Elon Dearest has abandoned all pretense and is officially jumping on the bandwagon (he was actually already there with Tesla autopilot and his humanoid bot):

 

 Elon Musk quietly starts X.AI, a new artificial intelligence company to challenge OpenAI

 

To be fair, he did ring the alarm bell a couple of times, although it looks like it's hard to put your billions where your mouth is.

 

Hah, yeah, I can't say I was surprised when I saw that. Guess why he was so eager to push a "halt to all AI development"... not to help humanity (as he claims now and again), that's for sure.

  

14 hours ago, Etern4l said:

Well, I wouldn't trivialise it this way. Probably one of the main reasons for caution here is that we don't really understand how our own intelligence works. [...]

 

Yeah, I agree - I only trivialize it to convey the point that these AI models are not really intelligent in the way we tend to think. The fact that one can pass a bar exam, for example, does not make them "smart" (in the same way humans are smart).

Now, that does not mean they won't have a huge impact on us. We can see the first signs of that already, as you yourself pointed out.

But as much as it can hurt, it can also benefit humanity at large if used properly. There's a lot of money to be made here, though (guess why OpenAI hasn't revealed anything about GPT-4; they should probably rename themselves ClosedAI at this point), so unless properly regulated, it will likely do more harm than good.


Here are a few articles about the whole "OpenAI transitioning into ClosedAI":

https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit

https://www.vice.com/en/article/ak3w5a/openais-gpt-4-is-closed-source-and-shrouded-in-secrecy

in case anyone's interested in knowing more about what they've been doing recently.


2 hours ago, serpro69 said:

Here are a few articles about the whole "OpenAI transitioning into ClosedAI":

https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit

https://www.vice.com/en/article/ak3w5a/openais-gpt-4-is-closed-source-and-shrouded-in-secrecy

in case anyone's interested in knowing more about what they've been doing recently.

 

Just looking at the founders' list, what could have gone wrong...

Sold out to the metastatic cancer Micro$oft at the first opportunity. The inoperable tumor Nadella in turn flushed their AI ethics team down the toilet as soon as it became really inconvenient:

https://www.platformer.news/p/microsoft-just-laid-off-one-of-its

 

Here is another one: they've been hiring/interviewing contractors in LatAm and Eastern Europe to create coding-related datasets:

https://futurism.com/the-byte/openai-replace-entry-level-coders-ai

 

Similar story with Google - it started off with the "Don't be evil" motto (which must have helped the initial adoption), then quietly dropped it.

 

Big tech just needs to be reined in, hard, now; the question is: who is going to do it? They've got US politicians in their pockets.

 

(Try not to double-post, bro - it puts extra load on the database, I guess.)

 

Edit: Good news, OpenAI are hiring lol

 



3 hours ago, Etern4l said:

(Try not to double-post, bro - it puts extra load on the database, I guess.)

Oh damn, I thought I clicked on "edit" :/ 

 

 

3 hours ago, Etern4l said:

Big tech just needs to be reined in, hard, now; the question is: who is going to do it? They've got US politicians in their pockets.

100% agree with this - they will never consider the "good of the public" so long as money flows in and no one does anything (real) to rein them in.

And yeah, you can easily see how they have politicians in their pockets. No need to go far - just look at the absurdity of the TikTok hearing last month. I would not be surprised if Google and Meta "sponsored" the whole "ban evil TikTok" push.

 

Who will rein them in at this point? Well, I still have some hope that the EU will do something meaningful with their AI Act. But how will it turn out? Who knows. They tried to do something good with GDPR, but in practice personal data is still being collected and sold (a year old, but still relevant: https://www.wired.com/story/gdpr-2022/ ). Then again, they seem to at least be noticing the problems and promising to come up with solutions.

Now they're going to try to regulate the same companies in the AI field, which is much more obscure and unclear at this point than the privacy and data protection areas. One can be hopeful at least, but I'm not overly optimistic either.

 

I think something drastic needs to happen for politicians to look beyond their greed, or for people to rise up and take to the streets. The question is: will it be too late at that point? I sure hope not.


26 minutes ago, serpro69 said:

Who will rein them in at this point? Well, I have some hope still that the EU will do something meaningful with their AI Act. [...]

 

Well, at least the EU are doing something:

 

https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

https://data.consilium.europa.eu/doc/document/ST-8115-2021-INIT/en/pdf

 

No hard prohibitions or constraints on development jumped out at me, unfortunately, but I could have missed something. They do classify systems which could pose risks to people's health and safety as high-risk, so that could be a useful tool. They are pushing the envelope in the right direction, as they did with GDPR (imperfect as it is), but unless we can castrate the outfits working on AGI, or abort some early versions, all will be for naught in the long run.


"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

1 minute ago, Etern4l said:

Well, at least the EU are doing something: [...]

I guess another question is: even if they do come up with a meaningful regulation, will it be enough if no one else (US, I'm looking at you first and foremost) does the same? Likely not.


GitHub

 

Currently and formerly owned laptops (specs below):

Serenity                    -> Dell Precision 5560
N-1                             -> Dell Precision 5560 (my lady's)

Razor Crest              -> Lenovo ThinkPad P16 (work)
Millennium Falcon    -> Dell Precision 5530 (work)
Axiom                        -> Lenovo ThinkPad P52 (work)
Moldy Crow             -> Dell XPS 15 9550

 

Spoiler

Serenity / N-1: Dell Precision 5560
    i7-11800H CPU
    1x32 GB DDR4 2,666 MHz
    512 GB SSD
    NVIDIA T1200
    FHD+ 1920x1200
    PopOS 22.04

 


41 minutes ago, serpro69 said:

 

I guess another question is, even if they do come up with a meaningful regulation, will it be enough if no one else (US, I'm looking at you first and foremost) does the same? Likely not?

 

No doubt that if something happens, it will be a process. Such regulation would raise awareness in the US, which, as we can see at the moment, ranges from somewhat to severely lacking.

The EU is the largest market in the world. Companies found in breach of the regulation could, and would, be fined, if not barred from the market outright.

The issue could also be raised bilaterally and at the UN level. There is a lot that can be done, and all of it would improve our odds to some extent (politicians would need to forfeit those fat cheques from Big Tech though, ouch).

 

Well, if not, then something like this could well play out:

 

Spoiler

Insert your favourite AI vs humans end game scenario in Part II

They also missed the part right at the beginning where such a vast robotic workforce would cause capitalism and society to collapse.

 

 

 

 

 

 


"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

I'm trying not to be so cynical and agree with the extremes. It's just that, after digging deeper, it all seems very possible; even if the odds are low, it's still a scary thought. Scary times we live in.

ZEUS-COMING SOON

            Omen 16 2021

            Zenbook 14 oled

            Vivobook 15x oled

 


Yes, prediction is very difficult, especially if it's about the future, as per the famous quote attributed to Bohr. The multitude of scenarios compounds the challenge, but we can simplify the problem by considering the probability of near-certain developments, such as AI systems becoming technically capable of displacing, let's say, "just" 25% of the human workforce (the peak US unemployment rate during the devastating Great Depression). Here is my take:

 

1Y  ->   Nope.

5Y  ->   Hmm, not sure anymore. We will see a huge impact by then.

10Y ->  Could well happen.

20Y ->  Will almost surely happen.

 

Then what? Do we really believe the fairy tales about infinite abundance for all? Do we actually want life on a pitiful dole/universal income (if not effective slavery) for ourselves and our children and grandchildren?

What will be the consequences of gigantic unemployment and an economic meltdown this time?

 

BTW this is a relatively benign scenario in the grand scheme of things.

 

The situation is a bit like being unjustly convicted by a jury composed of shifty billionaires and their minions, then sentenced to death.

People tend to spend several years on death row. Do we give up and do nothing, or try to save ourselves? Appeal, hire private investigators, raise public awareness, talk to the press, write a book; failing that, try to escape, start a prison riot, etc. The good news is that in this case the entire population would be awaiting the same fate (the degree of awareness might vary, but it will increase as time progresses), so hopefully there's safety/power in numbers...

"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites

On 4/17/2023 at 12:58 AM, ryan said:

Yikes. Agreed

 

I'm sure they have fail safes installed? Do they? Any info

 

Like those exploding collars from "The Running Man", etc. Those will be for us, bro, once we get locked up in human farms :)

A bit more seriously, they try to "align" the AI, so ChatGPT, for example, has undergone PC training, and it would be hard to get it to produce any non-PC answer. Good luck with the more sophisticated models though.

 


"We're rushing towards a cliff, but the closer we get, the more scenic the views are."

-- Max Tegmark

 

AI: Major Emerging Existential Threat To Humanity

Link to comment
Share on other sites
