NotebookTalk

Etern4l

Member
  • Posts

    1,869
  • Joined

  • Days Won

    12

Etern4l last won the day on June 27 2023

Etern4l had the most liked content!

1 Follower


Etern4l's Achievements

Veteran

Veteran (13/14)

  • One Year In
  • Posting Machine Rare
  • One Month Later
  • Very Popular Rare
  • Week One Done

Recent Badges

2.6k

Reputation

  1. No honour among thieves I guess.
  2. Bad Apple news today: Apple wouldn’t let Jon Stewart interview FTC Chair Lina Khan, TV host claims

     What does this have to do with AI? Well, it turns out Jon Stewart decided to join the chorus of high-profile humans speaking out about the risks and consequences of AI... and Apple didn't like that, since they are reportedly planning to jump on the LLM bandwagon.

     In other news: UK and US Sign Landmark Agreement On AI Safety

     Have not had a chance to scrutinise it yet, but expecting max big-tech-sponsored BS along the lines of the quotation below:

     UK tech minister Michelle Donelan said it is "the defining technology challenge of our generation." "We have always been clear that ensuring the safe development of AI is a shared global issue," she said. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives."

     Lastly, the artists are fighting on: Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods
  3. What's the alternative though? Regarding the social and other media messaging: the split is roughly 50/50. I wonder if the omnipresent us-vs-them narrative is going to lead to a good outcome... If things continue and escalate, could the worst happen? (Probably not the way suggested in the trailer of that movie: Texas in alliance with California lol)
  4. Of course, although the phrase also makes a lot of literal sense when applied to hedonistic behaviour. I don't know too many people who would say that the last thing they want to do before the world ends is buy some NVidia stock lol. Well, this is going more mainstream now.

     BTW I'm not sure any current LLM chatbot could pass an informed Turing test by an interviewer who is aware of LLMs' shortcomings.

     Meanwhile, uninformed government bureaucrats buckle under the FOMO pressure and embed the technology into the infrastructure: NYC's Government Chatbot Is Lying About City Laws and Regulations

     More good news from our "friends" at M$ and ClosedAI: Microsoft, OpenAI Plan $100 Billion 'Stargate' AI Supercomputer

     I mean, this is getting completely out of hand now. Remember: any time you use the current data-harvesting M$ software and services, or otherwise give them any money, you are funding that.

     Compared to the above, this next piece is of minor significance and par for the course (in part because the technology is out there already): OpenAI Reveals AI Tool To Recreate Human Voices

     So, Closed are "responsibly" releasing a tool which can clone a human voice. What could go wrong? This stuff should be criminalised ASAP. Why isn't it, America?

     I saved this real gem for last: Larry Summers, Now an OpenAI Board Member, Thinks AI Could Replace 'Almost All' Forms of Labor

     So, Closed are getting bolder fast: previously they would publish reports cautiously claiming that about 20% or so of jobs would be affected, but now they have hired Larry Summers (a famous and outspoken economist) to sit pretty on the board and essentially profess the end of capitalism and human civilisation - clearly an excellent business opportunity in the era of AI-pumped disaster capitalism.

     Stay strong and keep thinking about the future, guys, while we can still do something about it.
  5. For reference: Madoff got 150 years, although, granted, that was for an even larger scam.
  6. Good old Patriot Act, and the government power creep just continues. Just like the criminals, they have the motive; you bet they have the means; and technology provides a sea of opportunity. Some sort of very significant change would be needed to stop all this, though it is not clear who could deliver that.
  7. Just to mention the obvious: the OS will automatically use all of it for disk caching, so any filesystem-heavy use case automatically benefits.
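     A minimal Python sketch that makes the page-cache effect visible (the file name and sizes are arbitrary choices for illustration): write a file, then time two sequential reads. The second read is typically served from RAM by the OS page cache rather than from disk - note that even the first read may already be warm, since the file was just written.

     ```python
     import os
     import tempfile
     import time

     # Create a ~32 MB scratch file (arbitrary size, just big enough to time).
     path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
     with open(path, "wb") as f:
         f.write(os.urandom(32 * 1024 * 1024))

     def timed_read(p):
         """Read the whole file in 1 MB chunks and return the elapsed time."""
         start = time.perf_counter()
         with open(p, "rb") as f:
             while f.read(1024 * 1024):
                 pass
         return time.perf_counter() - start

     first = timed_read(path)   # may hit disk (or the cache, if still warm)
     second = timed_read(path)  # usually served straight from the page cache
     print(f"first read: {first * 1000:.1f} ms, second read: {second * 1000:.1f} ms")

     os.remove(path)  # clean up the scratch file
     ```

     No application changes are needed to get this benefit - the kernel lends spare RAM to the cache and reclaims it transparently when programs ask for memory.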
  8. There is certainly a FOMO aspect to that. BTW if there is no tomorrow, then the value of NVidia stock should be close to $0. I don't think any adverse scenarios are factored into the pricing.

     This is not particularly new, but it does dismantle the "AI safety guarantee" supposedly provided by containment of advanced AI models in big datacentres, which can be powered off or nuked.

     BTW while we have seen Geoffrey Hinton's talks here before, they are so good I would like to share another: a short lecture where he develops and delivers the fairly natural premises behind his AI safety concerns. Worth watching the whole thing - it's not overly technical - but fast-forward to the punch line if you're time-constrained.
  9. I guess there is some good news and a rare glimmer of hope: Public Trust In AI Is Sinking Across the Board

     "When it comes to AI regulation, the public's response is pretty clear: 'What regulation?'," said Edelman global technology chair Justin Westcott. "There's a clear and urgent call for regulators to meet the public's expectations head on."

     Not-so-great developments further on, sadly: Teachers Are Embracing ChatGPT-Powered Grading

     Spot-on comment: "Pupils use ChatGPT to write the essay, then the teacher uses ChatGPT to grade it."

     Reddit Will Now Use an AI Model To Fight Harassment (androidauthority.com)

     Reddit decided to completely give in to the dark side: first they sold data to AI companies, now they will be running AI thought police on an already unusable echo chamber of a website.
  10. That is a bit of a strawman argument, because computers can be made very reliable (much more so than a Windows PC!) - aircraft avionics and spacecraft being two prime examples. That said, AI systems are likely to suffer from a completely new class of reliability issue, where software which is "technically" working correctly exhibits strange or unexpected behaviour that is not due to a bug or a hardware fault - kind of akin to human mental illness, except that, instead of being relatively rare, it afflicts most if not all LLM systems today.

     Yes, the injustice in this is that major AI creators and the other filthy rich will be the last to potentially go - or perhaps that will be their punishment. We are unlikely to be the chosen ones to be there and find out.

     As for the fringe advocates, we have to be careful not to subvert the cause via silly publicity stunts. I'm much more of a fan of serious people doing the advocacy - naturally, they are much more likely to be taken seriously.

     To be fair, people suffer from the exact same problem. This is not easily solved; it would require universal access to excellent unbiased education, which is almost an oxymoron - and the terrible polarisation of our societies is one of the results. The difference is that every one of us is different, and it's not as easy to mass-produce biased humans as it is to have an army of biased AIs (biased in whatever way the creator desires).
  11. Well, it's perhaps one of the most palpable and immediate concerns; however, there are many others, including:

      • Impact on jobs and broader economic consequences, such as an increase in inequality
      • Concentration of power in the hands of big tech
      • Concerns around the impact on human civilisation / society (e.g. the "Wall-E syndrome")
      • Military applications
      • Illicit use (e.g. hacking etc.)
      • Geopolitical risks
      • AGI / singularity amplifying the above and posing a direct existential risk

      Disinformation, manipulation, and misdirection are extremely powerful tools in advertising, politics etc. History is awash with examples of this, including the events leading up to WWII, as well as contemporary ones in quite a few Western countries (worryingly).

      The temporary saving grace is that, not unlike people, the technology is unsafe due to its unpredictability. An AI can literally go nuts... for now at least: ChatGPT goes temporarily “insane” with unexpected outputs, spooking users
  12. Cloudflare thinks it’s clear. Depot time?
  13. Pretty shameless and naive little AI promo clickbait attempt from M$N. “Actually good AI sided with a human” lol. Principally, this just highlights the dangerous unpredictability of the technology, even when deployed in a commercially sensitive setting. Of course, M$N won’t be publicising any cases where the outcomes are not so beneficial to the clients.
  14. I don’t know if there is anyone who could single-handedly do anything about it. Has he broken any laws? No, I guess there are currently no US laws against creating a technology that could wipe us from the surface of the planet, or nuclear weapons would not have been created. Could POTUS shut ClosedAI down via an executive order on national security grounds? Perhaps, but he certainly won’t. The only hope is that the people wake up and start taking action against AI: raise the awareness, boycott the products and companies involved, and force their democratic representatives to act.
  15. I do wonder why those chillers (presumably) don’t have optional exhaust pipes - they are basically portable aircon units.