NotebookTalk

Everything posted by Etern4l

  1. Sorry, I forgot you hate lawyers! Not a bad analogy. Just like we know AGI is coming. Will the higher ups do something about it this time?
  2. Of course. Automated factories = no workers, automated cash registers = no cashiers, automated call centers = no human advisors, self-driving cars = no human drivers. Sam Altman is saying OpenAI's job is to automate the job of an AI programmer, Zuckerberg is now talking about people talking to AI avatars rather than to other people, Hinton is talking about the benefits of talking to an AI doctor who has seen millions of patients instead of thousands, etc.

I think you are just being passive. It's as if Roosevelt had said in 1941: "OK - they came out of nowhere, bombed the hell out of Pearl Harbor, and they have a huge Pacific fleet while ours has been decimated. Message received loud and clear, we will just start peace talks and hope they leave us alone, as nothing can be done. It is what it is. Hey, how about we invest in their companies so we can actually make some money off this." In contrast, 60% of Americans recognize the dangers of AI, the EU regulators are up in arms, and China is actually leading on regulation. We can do something about this, rather than leave it to Zuck, Altman, Hassabis, Satya Nadella, Elon Musk etc. But it will take serious effort and sacrifices.

Edit: Some good ones from Twitter: LLMs can detect that they are interacting with less sophisticated users, and give worse (less correct) answers in that case. Anthropic found LLM base models give you worse answers if you use a prompt that implies you're unskilled and unable to tell if the answer is right. To somehow end on a humorous note: "How could AGI outmaneuver humanity's corporations?"
  3. You forgot the part where it took over 300 years from the time the printing press was invented until the invention of the steam engine. BTW, it was 1800 years before Gutenberg that Demosthenes noticed: "A man is his own easiest dupe, for what he wishes to be true he generally believes to be true." Well, I'm the opposite on this: what I wish to be false, I believe to be true.

The pace of change is rapidly accelerating, and all those past accomplishments of mankind had one thing in common - humans were always in charge of the technology. Humans were flying the aircraft, operating the steel mills, and are now still running the aircraft carriers. This whole AI revolution has one grand goal in mind: get the humans out of the equation in the name of profit, power, and the short-sighted thrill of the intellectual challenge. While so far we have managed not to destroy ourselves and the planet operating on that strange basis, you can only get lucky for so long.

The question is what it means "to make the best out of a bad situation". People could shut all of this stuff down virtually overnight. Why don't they? Suffice it to look at some of the most popular videos on YouTube...
  4. Right, an influencer is sort of like a pro gamer - both are occupations that didn't exist 20 years ago in their current form. How many JayzTwoCents and GNs do people need? Probably one in 10,000 of those who try this make it big, in both cases, and neither is a stable career path. Going further, ultimately it's just AV content. Is it that much of a stretch to imagine AI being able to generate a perfect influencer podcast, video or stream? Will people want to watch incredibly charismatic and attractive artificial characters showing them cool stuff? Well, the smartest ones won't, but that's going to be a dying breed if all goes to plan.

Up until last year, the common advice given to people looking for a good career in tech was "learn to code". Fewer people give this advice now. Just think about it: if a single entity has the knowledge of 1,000 specialists, speaks all the human languages, can code in all computer languages (albeit not perfectly by any stretch, and only when working on small problems), and can pass the bar and MBA exams, what kind of job, and how many jobs, cannot be automated using something just a bit more advanced, maybe not even full AGI? Social backlash and harsh regulations are the only way to stop the destruction of the middle and so-called working classes (in that order, given that robotics is lagging a bit).
  5. Moreover, the pump is quite slow, there is no way to see or control its RPM, and the backplate is small-ish (pre-LGA1700 design). I switched to a DeepCool 3000 RPM AIO and all is well (enough).
  6. Sounds like something a well-off boomer pensioner would worry about - I am making no assumptions, and no offence intended if that guess happens to be accidentally correct. Most people would actually be more concerned about things like the impact on jobs and the economy, or the future of their children and grandchildren.

The claim that AI will create new jobs is a bit of an old trope, based on comparison to the industrial revolution, which BTW resulted in decades of disruption and increased unemployment. This situation is qualitatively very different. Literally no AI cheerleader is able to point to any significant new source of high quality jobs for the displaced people, other than maybe care work (Microslop's CEO Satya Nadella was kind enough to suggest that in one interview). There may be some new jobs, but the question is about the balance of new to lost jobs, right? How many new jobs? How much will they pay? Will the incomes of people still employed in "old jobs" go up or down, given that AI is now taking over value generation? What's the distribution of wealth before and after, bearing in mind that relatively dumb automation has already led to a vast increase in inequality? It's basically a slow reversal back to the feudal system. If the US loses say 25M jobs to automation (that would be a rather conservative estimate for this decade) and gains 5M, it won't be a success story, and the net effect on the stock market won't be great either.

Again, automation has sadly led to an increase in inequality, and it's highly likely that billionaires will be better off in the short to medium term, unless taxes on them and big tech end up going up to cover welfare/UBI and ultimately capitalism as we know it breaks down. One suggestion I heard from an ex-Google exec is that something like a 95% tax will need to be levied on AI companies to cover the impact of automation. That might work, but it would just enable a UBI dystopia.

In summary, we need to be able to think a few steps ahead. What if an AGI/ASI actually gets developed, and starts powering things like Boston Dynamics' Atlas, or Elon Musk's Optimus humanoid robot? It's game over for vast swathes of human jobs then. Sam Altman's Worldcoin crypto and a Universal Basic Pittance of an Income to the rescue, I guess - I'm sure that would fly without a hitch in the US.
  7. Producing deepfakes is already illegal in some jurisdictions. I don't think such images have any value as an instrument of extortion. Most people either do or will soon understand that AI can fake virtually any image, and therefore a random picture of unknown origin has almost zero informational value.
  8. OK, so the C7 is a bit of an older model - 2017 or before. Plus that usage was fairly taxing on your OLED lol. My C9 detects static-ish content and dims the screen after a while; completely static content triggers a full screen saver. All that said, OLEDs are not ideal for 24/7 use with static content, since they run their anti-burn-in routines while on standby - which a panel that is never idle doesn't get to do. I'm sure it doesn't apply in your case, but in general it's very important to keep them powered on even when "turned off".
  9. Wow, that's quite unfortunate. Literally the first time I've seen anyone on NBR/NBT report OLED issues, going back to the Alienware 13 OLED. What displays were those? Zero issues with my LG OLED TV after 3.5 years, with minimal gaming but lots of YT, Spotify and TV with static logos.
  10. In the very short term unless a miracle happens and the research stalls. Extremely unlikely in my view. Further down the road, it's not difficult to see that things can (and given the current direction of travel - likely will) get out of hand if people don't put aside their differences and start working together on this man-made problem. I mean, Sam Altman, the very person in charge of the most advanced AI model at the moment talks about the fall of capitalism. He is also thinking ahead and just started Worldcoin, a new crypto project that aims to scan people's retinas and help administer universal basic income.... Forum rules prohibit the correct written reaction here.
  11. Incidentally, that's an interesting parable. One of the main reasons for the dodo going extinct was its complete fearlessness of the "biological superintelligence" (in relative terms) which all of a sudden appeared in its habitat: humans.
  12. I saw that earlier. That's an easy one. On one end Hawking, Turing himself, living AI Turing award winners, heads of big tech - all have voiced concerns in an unprecedented manner. On the other hand, miCrosoftNBC wheeled out a bit of a shifty VC billionaire with what looks like zero AI background to opine to the contrary, against some of humanity's greatest minds... VCs are pouring money into startups, and guys like him must really hate the negative publicity. Although it's a bit of a waste of time (on the guy, not our dear readers), I will quickly address a couple of his simplistic/intentionally misleading points highlighted in the biased article:

1. A.I. doomers are a 'cult'

Well, AGI researchers are more like an arcane cult (0.01% of the population? less?). Moreover, he is closely associated with Meta, which is a genuine cult. I know a guy who joined as a senior SWE; the key part of the interview was about alignment with Meta values set down by Dear Leader. Anyway, the title is an ad hominem - the method of choice of simpletons and populists - hence I am not going to respond in kind by calling Andreessen a cultist.

2. Andreessen starts off with an accurate take on AI, or machine learning, calling it "the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it."

Actually, it's not that accurate. In particular, there is no requirement for AI to be implemented in software, and he (or the journalist) doesn't make a distinction between ANI/AGI/ASI etc.

3. AI isn't sentient, he says

Which means absolutely nothing. We don't even know how to define sentience, never mind how it might work in humans. It's not clear that AI has any need for human-like sentience.

4. "AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive," he wrote. "And AI is a machine - is not going to come alive any more than your toaster will."

Actually, that's just a bare-faced misrepresentation. Goal-driven AI systems have existed for a long time, and the latest ones are using LLMs.

Etc. I could go on for another 10 points. Basically, I haven't spotted a single sensible statement. Feel free to point out if anything that looks like a kernel of wisdom caught your eye there. He is basically worried that regulation would kill off his AI startups, that's all. As for the need for regulation: do we want unregulated open source bioweapons, nuclear weapons etc.? Of course not - same thing with AGI. True, the US government hasn't always been a source of inspiration on how to implement effective regulation (e.g. on data protection), but some regulation is better than none at all, even if the risk is as "small" as 5-10%.

In contrast, this is what a proper argument looks like (by Yoshua Bengio): How Rogue AIs may Arise. The problem is that the formalism is not exactly very accessible, but at least the executive summary is straightforward. I'm not sure what Yoshua Bengio's net worth is, and whether he would be able to afford the services of a CNBC journalist with the same ease a billionaire can.

That didn't look like even GPT-1, but let's congratulate @ryan on achieving intellectual supremacy over whatever that represented... must have been therapeutic, humour often is.
As for the actual GPT-4 in action: Healthcare Org With Over 100 Clinics Uses OpenAI's GPT-4 To Write Medical Records. Nvidia's GPT-4 powered bot in Minecraft outperforms all other AI agents. To clarify, this is not to say Skynet will be born out of Minecraft, but rather to highlight how powerful and versatile the model is, specifically in application to goal-driven tasks (which the crooked Andreessen, or his miCrosoftNBC affiliate journalist, claims AI cannot tackle lol). A rough sketch of what such a goal-driven loop looks like follows below.
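To make "goal-driven" a bit more concrete, here is a minimal sketch (my own, purely illustrative - not the NVidia/Minecraft code); call_llm, observe and act are hypothetical stand-ins for a chat-completion API and an environment interface:

```python
# Minimal, hypothetical sketch of a goal-driven LLM agent loop.
# call_llm/observe/act are stand-ins, not any specific vendor's API.

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API call."""
    raise NotImplementedError

def run_agent(goal: str, observe, act, max_steps: int = 20) -> list:
    history = []
    for _ in range(max_steps):
        observation = observe()  # e.g. current game/world state, rendered as text
        prompt = (
            f"Goal: {goal}\n"
            f"Recent actions: {history[-5:]}\n"
            f"Observation: {observation}\n"
            "Reply with the single next action to take, or DONE if the goal is met."
        )
        action = call_llm(prompt).strip()
        if action == "DONE":
            break
        act(action)               # execute the proposed action in the environment
        history.append(action)
    return history
```

The point being that the goal-seeking behaviour lives in the loop wrapped around the model, which is why "AI doesn't have goals" is such a weak argument.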
  13. Good point, although the question is whether taking breaks every 30 min would not address this. I'm sure Meta and Apple have conducted the relevant health&safety studies already ;)
  14. I'd like to believe it's not black and white, but it's certainly a minefield out there. Edit: A rare optimistic example to prove there is still some hope left: https://www.notebookcheck.net/System76-to-disable-Intel-Management-Engine-on-its-notebooks.267696.0.html
  15. Well, the Apple headset is a VR/AR hybrid. Obviously, having the functionality it offers (4K + eye tracking + hand tracking) in a sunglasses format would be ideal, but that is clearly impossible at this point. The Nreal Light glasses are kind of getting closer, but they still appear to have much lower resolution and no eye tracking. Usability probably wouldn't be there, especially compared to the expected quality of the experience on the Apple headset (if they can do one thing, it's smooth user interfaces). As for the new Apple headset, pricing aside, it really depends on how comfortable the headset is. I could do some serious hours in the Rift. Edit: even 4K AR/VR probably wouldn't be enough to come close to the experience of interacting with a 4K monitor due to the different distances involved; it would need 8K.
  16. I only used the original Rift, the res wasn't high enough for productivity (or frankly movies), but I could kind of see it if the resolution was much higher. You can just conjure up multiple monitors and whatever space you desire, wherever you are. Apple Vision Pro def targets productivity. 4K resolution per eye, seamless AR, and no need for controllers that would get in the way of using the keyboard. We'll hopefully see what it looks like in practice next year. Apparently it's a bit heavy, due to the front screen and being made of metal.
  17. Anyone explored ditching their monitors for the latest VR headsets (Quest 2/Pro I guess) for productivity? Looks like the Quest 2/Pro are not quite there yet; perhaps the Quest 3 or Apple Vision Pro at $3500 will do the trick :)
  18. Influencing is easy bro: "User error". Just need to leave your ethics at the door.
  19. Asus obviously have been working hard on the marketing side of things. Sponsored reviews aside, another upshot of that effort is the ProArt operation, where overpriced and/or sub-standard (relative to price) components are punted to "creators" at a healthy premium (my favourite examples include the $5,000 ProArt PA32UCG monitor, as well as the ProArt motherboards). The underlying assumption here is that "creators" tend to be more on the technically naive side, and will fall for the marketing (incl., as we now know, the "reviews") hook, line and sinker. What's particularly clever about that is that they will soon just rename the line to PromptArt and sort of maintain brand continuity, while seamlessly scaling the components down further - since prompting cloud AI services like Midjourney to create pretty much any (static, as of the current version) visual art one might fancy doesn't actually require any significant local compute (or creative skills, for that matter).
  20. Continuing my response to @Aaron44126.

3. I don't want to go too much into my background, but it spans CS and DS. University work notwithstanding, I have been building ML models for several years now, and I am reasonably familiar with most areas relevant to LLMs. The field has changed dramatically since my simple university work on NNs (from first principles, of course), back when neither the compute nor the methods were there. It's a different game to what it was 15 years ago, not to mention further back. The theory has advanced, and the complexity of the models is absolutely staggering.

Just going back briefly to point 2: nobody has any idea precise enough to be useful about what's going on inside an LLM; the humongous models are inscrutable. This should be clear just from looking at the number of parameters, but also from listening to leading experts in the field. If some of @Aaron44126's colleagues understand what's going on inside LLMs, more power to them - well-deserved Turing awards all around, and I hope to see their groundbreaking work published in Nature shortly. More realistically though, some experts estimate that if we stopped right now, we might reach a good understanding of what's going on inside a 1T-parameter LLM in a couple of decades. Probably more on the conservative side, but the challenge is serious. We've created an artificial super-brain in pursuit of profit, we don't really understand what's going on inside it, and yet we've unleashed it upon the world.

4. "The bulk of advancement came from scaling"

I am pretty sure world-class (not to mention less distinguished) experts would concur that the large amount of progress on the algorithmic side is what enabled the scaling. Put another way, in the absence of those advancements LLMs wouldn't be feasible on the current hardware. ChatGPT would not have happened without significant innovation on the training methodology side either. Now, some people might say we won't progress much further towards AGI without quantum computing. Well, to that I would say we are still far away from reaching the limit of what's possible using the current deterministic computing hardware. Time will soon tell.

5. "The AI has made mistakes"

Of course it has. We say "to err is human", and machine learning takes this to the extreme during its learning process. The fundamental issue is that, when applied to a large and complicated problem space, we never know if we have trained the machine well enough, and whether it will deal well with unexpected situations. Of course, we don't know that about people either (although the particular case was a preventable mistake in mental healthcare).

6. "The AI has limitations"

Again, of course it has. We probably wouldn't be talking if it didn't. What is concerning is the rate of progress:

5 years ago - idiot
2 years ago - 5 y.o.
6 months ago - GPT 3.5 - struggles on a bar exam (as an example)
2 months ago - GPT 4 - IQ 130 (non-verbal) / 155 (verbal, 99.9%ile), aces that bar exam

One of the problems for us in comprehending this is that people in general have a hard time spotting and extrapolating exponential trends. I will repost this again after editing out the forbidden word: even if it does make some mistakes, you have to look at the breadth of knowledge it possesses. A single entity basically knows everything we do, up to a certain (very considerable) depth.
An average person may have read 100 books; GPT has probably read the equivalent of a million, and can do pretty amazing stuff with that knowledge, indicating a good level of emergent understanding. Just to be clear, the mechanism of that understanding is different to ours, but ultimately what matters is performance.

7. The drone simulation debacle

Let's just start by getting as close to the source as possible: https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

Now let's ignore the PR/denials by the military, and take a look at the creds of the speaker:

However, perhaps one of the most fascinating presentations came from Col Tucker 'Cinco' Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

So, what did he say?

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission - killing SAMs - and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective." He went on: "We trained the system - 'Hey don't kill the operator - that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target." This example, seemingly plucked from a science fiction thriller, means that: "You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI" said Hamilton.

To me this is a perfectly sensible account of what could have been observed during training of a system which has no innate morals or values. You have to give it negative points for killing the operator, destroying the comms, and potentially a myriad of other bad behaviours you may or may not be able to envision upfront. There is also a separate, very serious problem he did not mention at all, namely the limitations of simulation. In cases where the cost of real-life mistakes is low, all that might be acceptable, but certainly not in military applications. The system briefly described seems too primitive to me to be fit for purpose. Now the problem of course is what happens if the system is very sophisticated - can you actually have any hope of reliably testing it? The answer today is clearly: no.
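To illustrate the reward-shaping point with a toy example (entirely my own sketch, not anything from the actual system Hamilton described): if the reward only counts destroyed SAMs, a reward-maximising agent has every incentive to remove whatever blocks it, and each unwanted behaviour has to be anticipated and explicitly penalised, one by one.

```python
# Toy, hypothetical illustration of the reward-shaping problem described above.

def naive_reward(destroyed_sams: int, killed_operator: bool, destroyed_comms: bool) -> float:
    # Only the mission objective is scored, so removing the operator
    # (who sometimes issues "no-go" orders) strictly increases expected reward.
    return 10.0 * destroyed_sams

def patched_reward(destroyed_sams: int, killed_operator: bool, destroyed_comms: bool) -> float:
    # Penalise the two bad behaviours we have thought of so far...
    reward = 10.0 * destroyed_sams
    if killed_operator:
        reward -= 1000.0
    if destroyed_comms:
        reward -= 1000.0
    # ...but any harmful strategy we failed to anticipate remains un-penalised.
    return reward

# Under the naive reward, the rogue strategy scores higher than the obedient one:
obedient = naive_reward(destroyed_sams=3, killed_operator=False, destroyed_comms=False)  # 30.0
rogue = naive_reward(destroyed_sams=8, killed_operator=True, destroyed_comms=False)      # 80.0
assert rogue > obedient
```

The whack-a-mole nature of the patched version is exactly the point: you can only penalise the failure modes you have already imagined.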
Of course, the fundamental issue here is the military playing around with autonomous weapons in the first place (there have been reports of that before). This should be banned at the international level ASAP, and enforced while we still can.

Edit: Some people would say, "yebbut China". China what? The threat posed by China, if any, is insignificant compared to the global threat posed by AGI. We are all in the same boat. People tend to rally together against a common threat, and AGI is just that. Instead of yelling "China! China!", we should initiate bi- and multilateral talks on this ASAP, in the hope that the world can find a common solution, and fast - otherwise we are facing a state-funded universal basic income dystopia, if not outright Armageddon.
  21. Jensen's greed is killing the PC market. Meanwhile, it turns out that building water-hungry chip factories in the desert can lead to water shortages (for the population rather than the tech giants): Arizona Limits Construction Around Phoenix as Its Water Supply Dwindles Edit: at least Intel claims they will offset their water usage via reclamation projects, TSMC could be another story.
  22. Thanks, your post explains a lot. I think we have quite a few misunderstandings to clear up. I will go ahead and itemize them, probably breaking up the response since there is a lot to go through.

1. There seems to be a lack of clarity over the term "intelligence" itself

A fundamental problem in defining intelligence is that we don't really understand how our own intelligence works. This opens the possibility of endless arguments about the semantics of the very term; however, I would suggest that for the sake of being able to conduct a constructive discussion we settle on the industry-standard terminology. I can sympathize with your point BTW, as for a while I had issues with the current ubiquitous usage of the term AI, referring to a property of an agent solving basically any task that a human would require a "reasonable level of intelligence" to deal with, regardless of how the computer deals with the problem. For better or worse though, the industry uses the umbrella term AI for systems capable of exhibiting any "intelligence" understood as above. That said, these days the following taxonomy is coming to the fore:

* ANI (Artificial Narrow Intelligence) - most of the systems today, still including ChatGPT, as well as image generation and voice replacement. These systems can bring about some benefits if deployed in a regulated and responsible manner, and pose less of a direct existential risk (primarily through some irresponsible or malicious use). They can still pose indirect risks through effects like sizable unemployment and fake news proliferation that will likely continue to disrupt democracy. All very serious immediate concerns; in a larger forum they would warrant a separate thread.

* AGI (Artificial General Intelligence) - human-level fluid and general intelligence; basically, the ability of the agent to operate autonomously and solve a broad range of tasks. A few years ago it was considered to be "decades away"; now Hinton estimates we could see one in as little as 5 years. We don't have much time, and once that genie is out of the bottle it will be too late to act. For instance, once developed, and unless the cost to operate them is prohibitive, these systems would take the disruption in the labour market to unimaginable levels. In the words of Sam Altman, "the marginal cost of intelligence will go to zero". This thread is primarily concerned with the prospect of these systems being developed in the immediate future.

AGI systems are being worked on primarily at 3 sites that we know of:
- OpenAI (effectively the AGI research arm of Microsoft) in SF - it's basically their mission statement
- DeepMind (the AGI research arm of Google) in London - ditto
- Tesla (forgot where in Cali it's based) - AGI is kind of really required for a foolproof FSD, among a couple of other things
Likely Meta as well. NVidia itself is a major contributor. One distinguishing feature of AGI research is that it requires enormous compute resources, and NVidia's current mission statement is to address that problem, but they also do a lot of fundamental research.
Arguably the most prominent voices expressing concern about AGI:

* Geoffrey Hinton - probably the most prominent godfather of deep learning (several interviews posted in this thread earlier, worth watching in detail)
* Yoshua Bengio - likewise a very well-known academic in the field, as a quick google will reveal in case someone is not familiar - an interview has been posted earlier
* Eliezer Yudkowsky - AI researcher, founder of MIRI
* The late Stephen Hawking
* Yuval Noah Harari - historian, futurist and a prominent author
* Evolutionary biologist Richard Dawkins - although his take is a bit perverse and resigned: "oh well, we are creating a species which will supersede us, that's life"
* Max Tegmark - ML researcher and professor at MIT
* Elon Musk lol - well, it's good that he did go public with his thoughts on this, but the warnings have certainly rung hollow since he co-founded OpenAI, has been progressing AGI research at Tesla, and is now opening a dedicated AGI company to rival OpenAI/DeepMind
* In fact Sam Altman and Demis Hassabis themselves have recently voiced concerns and called for regulation, although the rationale is not entirely clear given that they run the two leading AI labs

Furthermore, hundreds of researchers (and Elon Musk again, not obvious if that helps the cause lol) have signed a letter calling for a pause on AGI research. Anyone not armed with deep knowledge about SotA AI models would do well to pause and reflect on the above. I have posted interviews of some of those individuals, and would definitely encourage people to watch them.

Obviously, the fact that a prominent member - or in fact any member - of the AI community is expressing safety concerns is very significant, given the personal cost to the person. We haven't seen tobacco industry professionals voicing concerns about lung cancer, or DuPont insiders warning about PFOA and other forever-chemicals - those would be career-ending moves. The threat really needs to be severe for people to risk it all. People have probably heard about the string of ethics/safety-related engineers and researchers getting fired by big tech on that basis; some examples below, all at the once "don't be evil" Google:

* Blake Lemoine - claimed an AI seemed sentient
* Timnit Gebru - sounded warnings about LLMs
* Margaret Mitchell - a colleague of Gebru

Clearly those big tech ethics departments are not independent, and are basically whitewash generators at best. As expected though, the typical official/public views on AI coming from AI professionals (especially from people who are not operating at the forefront of deep thinking about the future of these models, and who are just motivated financially - frankly, the majority of practitioners, no offense) can be characterised as blatant marketing with rose-tinted glasses on. Even so, 50% of those heavily biased AI researchers estimate the risk of extinction from AI at 5-10%. That's staggeringly high. Most people would never board an airplane if the risk of a hull-loss level crash was that high.

2. The notion that we have a good understanding of how it works

This is a quick one: actually, we don't. Explainability of deep learning models is a serious challenge even in the case of simpler models. We have some general idea of how these models work, but no in-depth understanding of what's going on inside models with 50B-1T+ parameters. For comparison, a human brain contains around 100T "parameters". I have to stop here.
To summarise this part: I would suggest we suspend any distrust of experts (a real risk, given the populist assaults on science), refrain from formulating strong views based on zero-to-perfunctory or out-of-date understanding of what's going on in this complicated and rapidly evolving field, and listen carefully to what the prominent individuals mentioned above have to say. In the next part I will shed a bit of light on my background and cover some of the more technical points raised. BTW, I would once again like to thank bro @Reciever for maintaining good order while we go through these potentially challenging proceedings.
  23. Beat me to it. Good old 2019... Again, a TED talk by a lady who was then in the weird business of documenting the weirdness of fledgling AI (based on examples coded by middle school students - no offense, no doubt very talented kids). Not saying there is none to document these days, but in most areas things are on a different level. For example, here we can watch endless videos of a dude driving a Tesla with FSD, where the "weirdness" is more like: "wow, that left turn, I would have waited but it worked well - this version is definitely more confident". Uber/taxi/van/truck drivers are watching this stuff with a fair amount of anxiety, I imagine. Another example earlier by @Aaron44126: "wow, it didn't solve that logic problem without help". Just give the good folks at OpenAI/DeepMind/Meta/NVidia/MuskCorp another year or two...
  24. You can imagine senior military personnel being surprised. If you put a human test pilot in an aircraft, in virtually no circumstances would you expect the result to be deliberate friendly fire, and of course the contractor themselves clearly wouldn't have advertised that as a possibility (even in a simulation). Yet, of course, it could be a perfectly rational thing for the AI to do. You can try to work around this, but fundamentally it's an alien intelligence. It has no morals or other human values, no empathy, and, at best, a limited version of what we would call "human common sense". I know it's a bit of a distraction, but being a bit of a movie buff, I am failing to resist the urge to share a classic scene which comes to mind here lol

Yeah, this might soon warrant a separate thread. Looking at the comments, people try to console themselves with the imperfections of ChatGPT etc. (probably based on the initial/free version), but of course we should expect these to be ironed out very soon indeed, if they haven't been already. People will try to run away to manual labour, but there are only so many dog walkers, plumbers, and HVAC engineers the world needs (and Elon is hard at work trying to progress robotics and commence an assault on that front). A good point was made by Yuval Harari in one of the videos earlier: only a few years of ca. 20% unemployment were enough for a well-known populist movement to grab power, which led to WWII.

Edit: in response to @Aaron44126, welcome back to the thread :) As mentioned earlier, I don't think the story is bogus. An Air Force colonel stood in front of a sizable audience and shared it, and it makes perfect sense - he was just describing some behaviour observed during training/testing. Would the military have confirmed it? Of course not, and now it's PR damage control, especially in light of the talk about AI regulation. I mean, the current progress in AI would have been considered sci-fi not long ago, certainly 10-20 years back. What holes do you see?

Better than no education for sure, but potentially really dangerous. One Valley company in control of the system educating billions in third-world countries. Hmm. If anything, I am personally more excited by the potential of AI to help improve healthcare, e.g. diagnostics: https://www.theguardian.com/society/2023/apr/30/artificial-intelligence-tool-identify-cancer-ai This is narrow AI tech which is relatively harmless, apart from maybe some impact on radiologists as a profession (hopefully there will always be people sanity-checking stuff at least).

Haven't watched it end-to-end, but there are a couple of problems there. First, the issue is the rate of improvement in capability, including reasoning. Second, I tried a few of those with 3.5 and was able to get the right answers with follow-up queries. I mean, it's well known that ChatGPT is still a bit of an idiot savant. Check out the story I posted earlier about a law firm getting sanctioned by the court for submitting a ChatGPT-generated filing containing made-up case references. Furthermore, comparisons to human thinking are a bit pointless because: a) we don't really understand how our brain works, and b) AI will almost necessarily be "thinking" very differently to us - how it does it doesn't really matter in the end if it continues to gain function at this staggering pace. Remember that folks appearing in TED talks are usually advertising something (in that case, an alternative approach to AI).
Yes, so as far as the public knowledge goes, we are not facing an immediate and direct existential threat from AI, however, we are clearly on course. Hopefully there is still enough time to prevent it, but probably not much. However, as mentioned above, the immediate issues such as the impending significant unemployment (which is almost a foregone conclusion now) or breakdown of democracy can very well lead to death and destruction, vide 1930s Europe and then the world.