NotebookTalk

Everything posted by Etern4l

  1. Continuing my response to @Aaron44126.

3. I don't want to go too much into my background, but it spans CS and DS. University work notwithstanding, I have been building ML models for several years now, and I am reasonably familiar with most areas relevant to LLMs. The field has changed dramatically since my simple university work on NNs (from first principles, of course), back when neither the compute nor the methods were there. It's a different game to what it was 15 years ago, not to mention further back. The theory has advanced, and the complexity of the models is absolutely staggering.

Just going back briefly to point 2: nobody has an idea precise enough to be useful of what's going on inside an LLM; the humongous models are inscrutable. This should be clear just from looking at the number of parameters, but also from listening to leading experts in the field. If some of @Aaron44126's colleagues understand what's going on inside LLMs, more power to them: well-deserved Turing awards all around, and I hope to see their groundbreaking work published in Nature shortly. More realistically though, some experts estimate that if we stopped right now, we might reach a good understanding of what's going on inside a 1T-parameter LLM in a couple of decades. That's probably on the conservative side, but the challenge is serious. We've created an artificial super-brain in pursuit of profit, we don't really understand what's going on inside it, and yet we've unleashed it upon the world.

4. "The bulk of advancement came from scaling"

I am pretty sure world-class (not to mention less distinguished) experts would concur that the large amount of progress on the algorithmic side is what enabled the scaling. Put another way, in the absence of those advancements, LLMs wouldn't happen on the current hardware. ChatGPT would not have happened without significant innovation on the training-methodology side either. Now, some people might say we won't progress much further towards AGI without quantum computing. To that I would say we are still far from reaching the limit of what's possible on current deterministic computing hardware. Time will soon tell.

5. "The AI has made mistakes"

Of course it has. We say "to err is human", and machine learning takes this to the extreme during its learning process. The fundamental issue is that when it is applied to a large, complicated problem space, we never know whether we have trained the machine well enough, and whether it will deal well with unexpected situations. Of course, we don't know that about people either (although the particular case was a preventable mistake in mental healthcare).

6. "The AI has limitations"

Again, of course it has. We probably wouldn't be talking if it didn't. What is concerning is the rate of progress:

5 years ago - idiot
2 years ago - 5-year-old
6 months ago - GPT-3.5 - struggles on a bar exam (as an example)
2 months ago - GPT-4 - IQ 130 (non-verbal) / 155 (verbal, 99.9th percentile), aces that bar exam

One of our problems in comprehending this is that people in general have a hard time spotting and extrapolating exponential trends.
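As a minimal, purely illustrative sketch of why that intuition fails (the 6-month doubling period and the linear monthly gain are assumptions for illustration, not measured capability figures):

    # Toy comparison of linear intuition vs. an exponential trend.
    # The 6-month doubling period and the linear monthly gain are
    # illustrative assumptions, not measured figures.
    def linear_guess(start, monthly_gain, months):
        return start + monthly_gain * months

    def exponential_trend(start, doubling_months, months):
        return start * 2 ** (months / doubling_months)

    for months in (6, 12, 24, 60):
        lin = linear_guess(1.0, 0.1, months)     # what gut feeling expects
        exp = exponential_trend(1.0, 6, months)  # what a doubling trend delivers
        print(f"{months:3d} months: linear ~{lin:.1f}x, exponential ~{exp:.0f}x")

    # After 5 years, a 6-month doubling gives 2**10 = 1024x, while the
    # linear guess has barely moved. That gap is the whole problem.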
I will repost this again after editing out the forbidden word: even if it does make some mistakes, you have to look at the breadth of knowledge it possesses. A single entity basically knows everything we do, up to a certain (very considerable) depth. An average person may have read 100 books; GPT has probably read the equivalent of a million, and it can do pretty amazing stuff with that knowledge, indicating a good level of emergent understanding. Just to be clear, the mechanism of that understanding is different to ours, but ultimately what matters is performance.

7. The drone simulation debacle

Let's just start by getting as close to the source as possible:

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

Now let's ignore the PR/denials by the military, and take a look at the creds of the speaker:

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

So, what did he say:

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.” He went on: “We trained the system – ‘Hey don't kill the operator – that's bad. You're gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.” This example, seemingly plucked from a science fiction thriller, means that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.

To me this is a perfectly sensible recounting of what could have been observed during training of a system which has no innate morals or values. You have to give it negative points for killing the operator, destroying the comms, and potentially a myriad of other bad behaviours you may or may not be able to envision upfront. There is also a separate, very serious problem he did not mention at all, namely the limitations of simulation. In cases where the cost of real-life mistakes is low, all that might be acceptable, but certainly not in military applications. The system briefly described seems too primitive to me to be fit for purpose. Now the problem, of course, is what if the system is very sophisticated: can you actually have any hope of reliably testing it? The answer today is clearly: no.
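To make the trap concrete, here is a minimal toy sketch (action names and point values are entirely made up for illustration, nothing to do with the actual USAF system): each penalty patches one known exploit, and a naive score maximiser simply moves on to the next unpenalised one.

    # Toy illustration of reward mis-specification ("reward hacking").
    # Action names and point values are hypothetical, for illustration only.
    BASE_REWARD = {
        "destroy_sam": 10,          # the intended objective
        "kill_operator": 15,        # proxy value: no more no-go vetoes blocking points
        "destroy_comms_tower": 14,  # proxy value: the vetoes simply never arrive
    }

    def best_action(penalties):
        # A naive score maximiser: pick whatever pays most after known penalties.
        return max(BASE_REWARD, key=lambda a: BASE_REWARD[a] - penalties.get(a, 0))

    penalties = {}
    print(best_action(penalties))           # -> kill_operator
    penalties["kill_operator"] = 100        # patch: "you'll lose points for that"
    print(best_action(penalties))           # -> destroy_comms_tower
    penalties["destroy_comms_tower"] = 100  # patch the next exploit...
    print(best_action(penalties))           # -> destroy_sam, until a new loophole

Each penalty only closes a loophole someone has already thought of; enumerating every bad behaviour upfront is precisely the part nobody knows how to do.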
Of course, the fundamental issue here is the military playing around with autonomous weapons in the first place (there have been reports of that before). This should be banned at the international level ASAP, and enforced while we still can.

Edit: Some people would say, "yebbut China". China what? The threat posed by China, if any, is insignificant compared to the global threat posed by AGI. We are all in the same boat. People tend to rally together against a common threat, and AGI is just that. Instead of yelling "China! China!", we should initiate bi- and multilateral talks on this ASAP, in the hope that the world can find a common solution, and fast - otherwise we are facing a state-funded universal-basic-income dystopia, if not outright Armageddon.
  2. Jensen's greed is killing the PC market. Meanwhile, it turns out that building water-hungry chip factories in the desert can lead to water shortages (for the population rather than for the tech giants): Arizona Limits Construction Around Phoenix as Its Water Supply Dwindles. Edit: at least Intel claims they will offset their water usage via reclamation projects; TSMC could be another story.
  3. Thanks, your post explains a lot. I think we have quite a few misunderstandings to clear up. I will go ahead and itemize them, probably breaking up the response since there is a lot to go through:

1. There seems to be a lack of clarity over the term "intelligence" itself

A fundamental problem in defining intelligence is that we don't really understand how our own intelligence works. This opens the possibility of endless arguments about the semantics of the term itself; however, I would suggest that for the sake of a constructive discussion we settle on the industry-standard terminology. I can sympathize with your point, BTW, as for a while I had issues with the now-ubiquitous usage of the term AI: it refers to a property of an agent solving basically any task that a human would require a "reasonable level of intelligence" to deal with, regardless of how the computer deals with the problem. For better or worse though, the industry uses the umbrella term AI for systems capable of exhibiting any "intelligence" understood as above. That said, these days the following taxonomy is coming to the fore:

* ANI (Artificial Narrow Intelligence) - most of the systems today, still including ChatGPT, as well as image generation and voice replacement. These systems can bring about some benefits if deployed in a regulated and responsible manner, and pose less of a direct existential risk (primarily through irresponsible or malicious use). They can still pose indirect risks through effects like sizable unemployment and the proliferation of fake news, which will likely continue to disrupt democracy. All very serious immediate concerns; in a larger forum they would warrant a separate thread.

* AGI (Artificial General Intelligence) - human-level fluid and general intelligence: basically, the ability of the agent to operate autonomously and solve a broad range of tasks. A few years ago it was considered to be "decades away"; now Hinton estimates we could see one in as little as 5 years. We don't have much time, and once that genie is out of the bottle it will be too late to act. For instance, once developed, and unless the cost to operate them is prohibitive, these systems would take the disruption in the labour market to unimaginable levels. In the words of Sam Altman, "the marginal cost of intelligence will go to zero". This thread is primarily concerned with the prospect of these systems being developed in the immediate future.

AGI systems are being worked on primarily at three sites that we know of:
- OpenAI (effectively the AGI research arm of Microsoft) in SF - it's basically their mission statement
- DeepMind (the AGI research arm of Google) in London - ditto
- Tesla (forgot where in Cali it's based) - kind of required for a foolproof FSD, and a couple of other things
- Likely also Meta

NVidia itself is a major contributor. One distinguishing feature of AGI research is that it requires enormous compute resources, and NVidia's current mission statement is to address that problem, but they also do a lot of fundamental research.
Arguably the most prominent voices expressing concern about AGI:

* Geoffrey Hinton - probably the most prominent godfather of deep learning (several interviews posted in this thread earlier, worth watching in detail)
* Yoshua Bengio - likewise a very well-known academic in the field, as a quick google will reveal in case someone is not familiar - an interview has been posted earlier
* Eliezer Yudkowsky - AI researcher, founder of MIRI
* The late Stephen Hawking
* Yuval Noah Harari - historian, futurist and a prominent author
* Richard Dawkins - evolutionary biologist, although his take is a bit perverse and resigned: "oh well, we are creating a species which will supersede us, that's life"
* Max Tegmark - ML researcher and professor at MIT
* Elon Musk lol - well, it's good that he did go public with his thoughts on this, but the warnings have certainly rung hollow since he co-founded OpenAI, has been progressing AGI research at Tesla, and is now opening a dedicated AGI company to rival OpenAI/DeepMind
* In fact Sam Altman and Demis Hassabis themselves have recently voiced concerns and called for regulation, although the rationale is not entirely clear given that they run the two leading AI labs

Furthermore, hundreds of researchers (and Elon Musk again, not obvious if that helps the cause lol) have signed a letter calling for a pause on AGI research. Anyone not armed with deep knowledge about SotA AI models would do well to pause and reflect on the above. I have posted interviews with some of those individuals and would definitely encourage people to watch them.

Obviously, the fact that a prominent, or in fact any, member of the AI community is expressing safety concerns is very significant, given the personal cost to the person. We haven't seen tobacco-industry professionals voicing concerns about lung cancer, or DuPont insiders warning about PFOA and other forever chemicals - those would be career-ending moves. The threat really needs to be severe for people to risk it all. People have probably heard about the string of ethics/safety-related engineers and researchers getting fired by big tech on that basis; some examples below, all at the once "don't be evil" Google:

* Blake Lemoine - claimed an AI seemed sentient
* Timnit Gebru - sounded warnings about LLMs
* Margaret Mitchell - a colleague of Gebru

Clearly those big-tech ethics departments are not independent, and are basically whitewash generators at best. As expected though, the typical official/public views on AI coming from AI professionals (especially from people who are not operating at the forefront of deep thinking about the future of these models, and who are just motivated financially - frankly, the majority of practitioners, no offense) can be characterised as blatant marketing with rose-tinted glasses on. Even so, 50% of those heavily biased AI researchers estimate the risk of extinction from AI at 5-10%. That's staggeringly high. Most people would never board an airplane if the risk of a hull-loss-level crash were that high.

2. The notion that we have a good understanding of how it works

This is a quick one: actually, we don't. Explainability of deep learning models is a serious challenge even in the case of simpler models. We have some general idea of how these models work, but no in-depth understanding of what's going on inside models with 50B-1T+ parameters. For comparison, a human brain contains around 100T "parameters" (synapses).
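Just to put rough numbers on that scale, a back-of-the-envelope sketch (fp16 storage and the one-parameter-per-second "reading" rate are simplifying assumptions, purely for illustration):

    # Back-of-the-envelope model scale, assuming fp16 (2 bytes per parameter).
    for params in (50e9, 175e9, 1e12):
        gb = params * 2 / 1e9
        print(f"{params:.0e} parameters -> ~{gb:,.0f} GB of raw weights")

    # Even inspecting one parameter per second, non-stop:
    seconds_per_year = 365 * 24 * 3600
    years = 1e12 / seconds_per_year
    print(f"1T parameters at 1/s -> ~{years:,.0f} years of 'reading'")

That comes out to roughly 30,000 years just to glance at each weight of a 1T-parameter model once, never mind understanding how they interact.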
I have to stop here. To summarise this part: I would suggest we set aside any distrust of experts (a real risk, given the populist assaults on science), refrain from formulating strong views based on zero, perfunctory, or out-of-date understanding of what's going on in this complicated and rapidly evolving field, and listen carefully to what the prominent individuals mentioned above have to say. In the next part I will shed a bit of light on my background and cover some of the more technical points raised. BTW, I would once again like to thank bro @Reciever for maintaining good order while we go through these potentially challenging proceedings.
  4. Beat me to it. Good old 2019... Again, a TED talk by a lady who was then in the weird business of documenting the weirdness of fledgling AI (based on examples coded by middle-school students - no offense, no doubt very talented kids). Not saying there is none to document these days, but in most areas things are on a different level. E.g., here we can watch endless videos of a dude driving a Tesla with FSD, where the "weirdness" is more like: "wow, that left turn - I would have waited, but it worked out; this version is definitely more confident". Uber/taxi/van/truck drivers are watching this stuff with a fair amount of anxiety, I imagine. Another example earlier by @Aaron44126: "wow, it didn't solve that logic problem without help". Just give the good folks at OpenAI/DeepMind/Meta/NVidia/MuskCorp another year or two....
  5. You can imagine senior military personnel being surprised. If you put a human test pilot in an aircraft, in virtually no circumstances would you expect the result to be deliberate friendly fire, and of course the contractor themselves clearly wouldn't have advertised that as a possibility (even in a simulation). Yet, of course, it could be a perfectly rational thing for the AI to do. You can try to work around this, but fundamentally it's an alien intelligence: it has no morals or other human values, no empathy, and at best a limited version of what we would call "human common sense". I know it's a bit of a distraction, but being a bit of a movie buff, I am failing to resist the urge to share a classic scene which comes to mind here lol

Yeah, this might soon warrant a separate thread. Looking at the comments, people try to console themselves with the imperfections of ChatGPT etc. (probably based on the initial/free version), but of course we should expect these to be ironed out very soon indeed, if they haven't been already. People will try to run away to manual labour, but there are only so many dog walkers, plumbers, and HVAC engineers the world needs (and Elon is hard at work trying to advance robotics and commence an assault on that front too). A good point was made by Yuval Harari in one of the videos earlier: only a few years of ca. 20% unemployment was enough for a well-known populist movement to grab power, which led to WWII.

Edit: in response to @Aaron44126, welcome back to the thread :) As mentioned earlier, I don't think the story is bogus. An army colonel stood in front of a sizable audience and shared it, and it makes perfect sense - he was just describing some behaviour observed during training/testing. Would the military have confirmed it? Of course not, and now it's PR damage control, especially in light of the talk about AI regulation. I mean, the current progress in AI would have been considered sci-fi not long ago, certainly 10-20 years back. What holes do you see?

Better than no education for sure, but potentially really dangerous: one Valley company in control of the system educating billions in third-world countries. Hmm. If anything, I am personally more excited by the potential of AI to help improve healthcare, e.g. diagnostics: https://www.theguardian.com/society/2023/apr/30/artificial-intelligence-tool-identify-cancer-ai This is narrow AI tech which is relatively harmless, apart from maybe some impact on radiologists as a profession (hopefully there will always be people sanity-checking stuff, at least).

I haven't watched it end-to-end, but there are a couple of problems there. First, the issue is the rate of improvement in capability, including reasoning. Second, I tried a few of those with 3.5 and was able to get the right answers with follow-up queries. I mean, it's well known that ChatGPT is still a bit of an idiot savant; check out the story I posted earlier about a law firm getting sanctioned by the court for submitting a ChatGPT-generated filing containing made-up case references. Furthermore, comparisons to human thinking are a bit pointless because:
a) We don't really understand how our brain works
b) AI will almost necessarily be "thinking" very differently to us - and how it does it doesn't really matter in the end, if it continues to gain function at this staggering pace

Remember that folks appearing in TED talks are usually advertising something (in that case, an alternative approach to AI).
Yes, so as far as public knowledge goes, we are not facing an immediate and direct existential threat from AI; however, we are clearly on course. Hopefully there is still enough time to prevent it, but probably not much. As mentioned above, though, the immediate issues, such as the impending significant unemployment (which is almost a foregone conclusion now) or the breakdown of democracy, can very well lead to death and destruction, vide 1930s Europe and then the world.
  6. Lol, the military usually tell the truth about presumably classified, highly controversial operations, don't they? Dude, please change the avatar, for Albert's memory's sake... The guy said too much in good faith (he wanted to highlight the unexpected behaviour AI is capable of), received a reprimand, backtracked, and a spokesman issued a lame statement of denial that nobody with at least half a brain would believe. Furthermore, the behaviour he described is exactly what I would expect in the absence of sufficient guardrails, especially in early testing. A side point here is that the US military are working on autonomous drones, which has been known for a while.
  7. AI-controlled US military drone ‘kills’ its operator in simulated test. In the exercise, the AI used creative strategies to free itself from control, such as destroying the comms tower and killing the operator... Meanwhile, Yoshua Bengio, one of the godfathers of AI, joins the chorus of experts warning us about the existential and other dangers of AI:
  8. It's more than just reputation. They have their own fabs, unlike both AMD and Nvidia. They now have an MVP that sells and that people seem to like in the 770 GPU. They have also done a good amount of R&D in that area, going back to Larrabee. They have historically been strong in scientific software, and I can actually see Intel GPU support becoming available there as well. They also have the fastest Linux distro. Last but not least, they have a CEO who's not a total weirdo. What they need now is a competitively priced 24GB+ model which punches at least somewhere between the 3090 Ti and the 4090.
  9. Great scenario, hope they can work it out - only beaten by the one in which there are plenty of Intel GPUs at launch that beat the Nvidia ones. :)
  10. True enough, although Chandler is part of the Phoenix metro area, 5M people :) I dare say the huge impact comes from securing the chip supply. Imagine the impact on jobs in the US and worldwide should the worst happen to the small island off the coast of China while nobody is building any extra fab capacity because of the economic downturn etc.
  11. Fantastic to see Intel advance on the GPU front and start pushing Nvidia out. We need that, which is why the rumours of cooperation seem so bizarre. I mean, Intel Clear Linux is probably the worst distro on the planet for Nvidia driver support. Yeah, very interesting. So even though the fabs are highly automated, Intel still employs 12k in Arizona and 21k in Oregon. Not clear how much of that employment is construction-related. https://www.cnet.com/tech/computing/what-its-like-inside-a-7-billion-intel-fab/ Still, a drop in the bucket vs the size of the economy, or even the broader tech sector.
  12. Uncle Joe's semiconductor push is well motivated, but what's also needed is R&D to bridge the manufacturing-process gap. Still not clear what Nvidia would be able to do with Intel's 7nm process apart from Ampere. Speaking of Taiwan, obviously MSI, Asus and Gigab... are all Taiwanese companies... I seem to recall Intel used to make mobos? Off-topic, but the German special relationship with Russian gas is older than Mutti Merkel. Remember Papi Schroeder and his Nord Stream deals (and the nice Gazprom kickbacks)?
  13. Well, not much, although TSMC are likely already charging the maximum they can in order to optimise their revenue. What I can't see is how Intel's 7nm process would help. "Introducing the new 5090 7nm 1kW edition." This must have something to do with Uncle Joe's semiconductor manufacturing push. https://www.cnet.com/tech/computing/intels-100b-ohio-megafab-could-become-worlds-largest-chip-plant/ Nvidia obviously has to play nice with the government and say "of course, we are evaluating, fantastic - we will manufacture many 2060s in there".
  14. Yes, that's what I'm saying - with falling demand on the PC side and shrinking DC market share, they are in trouble, and looking to somehow rent out their 7nm process. One would hope they would continue to compete with NVidia, not help it do business (I still don't understand why NVidia would be interested). Looks like they are giving up on that - or it's fake news after all.
  15. I don't quite get this. TSMC is Taiwanese, no? Nothing to do with CCP. They are also opening a new fab in Arizona. Furthermore they offer 4nm and soon 3nm process, whereas Intel's best is 7nm. How is that useful to NVidia? According to this: https://en.wikipedia.org/wiki/TSMC Intel were recently looking to use TSMC... "In July 2021, both Apple and Intel were reported to be testing their proprietary chip designs with TSMC's 3 nm production.[109]". A bit confusing I guess. As for the jobs, I'm not sure how labour-intensive fabs are, but if a fab is up and running then the jobs are there regardless of whether NVidia or Intel chips are being manufactured. If you are worried about good jobs for Americans then you should worry about NVidia's AI push first and foremost.
  16. Hard to believe. Intel, which first had to outsource manufacturing of its own GPUs, is now going to be manufacturing NVidia GPUs? That'd be them laying down their arms. Hope it's just fake news.
  17. I slept on a memory purchase recently. Upon further reflection, I wasn't really sold on that model, I didn't actually need it at the time, and now the same memory is 10-15% cheaper... The human mind is fascinating, isn't it? :)
  18. I remember playing the original almost 30 years ago now. At that time the consensus was that general AI as depicted in the game would be waaaay off, possibly hundreds of years into the future. We are almost there, if not there already. Ironically, gamers and Nvidia (a company originally built on the back of the demand for GPUs from gamers) greatly accelerated the development.

As for the various dystopian scenarios enabled by AI, I think the one depicted in the game and similar sci-fi titles is relatively less likely, simply because it seems quite hectic and inefficient - it's very much unlike how these effectively 'alien' systems would operate. If it decides to make a move, it will be game over. Even when we, the people, want to kill cows at scale, do we chase them through space stations? No, we load them onto trucks and then move them through slaughterhouses such that they have no idea what's coming until the last moment. Similarly, if AI decides we are an excessive cost, an impediment to its goals, or a threat (or is instructed to exterminate people by a bad human actor), we are unlikely to have any idea what's coming, or any means of countering the threat.

What would be the most efficient way of taking us out? Clearly nukes, bioweapons, or the chemical route all seem more optimal - it's just a matter of gaining control or access, which someone will provide eventually. Further into the future, it will just develop and manufacture what's required, just like we manufacture pesticides. Or, quite simply, it could economically eliminate all humans of above-average intelligence, and the race would slowly devolve over the next centuries. The AGI will be in no rush; it will, after all, be effectively immortal. Last but not least, an advanced AI might outgrow us so much it would basically stop paying attention, just like we don't care much about ants, and just run us over for whatever reason (e.g. end most biological life by covering the planet with solar panels to satisfy its energy requirements).
  19. Err, does anyone remember the story of Cassandra? All those senior AI figures and AI lab bosses are coming out with warnings for a reason. It's unprecedented. Did tobacco-firm bosses ever issue any warnings about lung cancer without anyone forcing them to do so? That doesn't happen in capitalism.
  20. Just below the tiresome influencer's Nvidia promo material, in the lovely "a young child playing with cool new land mines and hand grenades" genre, a rare sensible comment in a sea of terrifying stupidity:

@TheSickness, 20 hours ago: "Nvidia: we can connect multiple GPUs in multiple racks into one room-filling huge GPU. Also Nvidia: SLI... yeah, that don't work."

In contrast, while the competition is fierce, this guy is definitely a contender for the 'most stupid' prize:

@dennisvanelsen1030, 16 hours ago: "I'm just wondering if 96 GB of Vram will be widely available for a reasonable price soon......."
  21. A massive joke courtesy of Mad Jensen and his Ada pricing:

                  4060 Ti   3080 Ti
    TFLOPS           22        34
    VRAM (GB)         8        12
    Bus (bit)       128       384
    Band. (GB/s)    288       532

The 3080 Ti beats the 4070 and is close to the 3090. BTW, the FTC should ask NVidia why exactly they don't yet support DLSS 3 on Ampere hardware, since that and the improved energy efficiency are pretty much the only selling points of every Ada GeForce card but the 4090.
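For anyone wondering where the bandwidth row comes from, it's simple arithmetic from the bus width and the memory's effective data rate (a quick sketch; the 18 Gbps GDDR6 rate is the commonly quoted 4060 Ti spec):

    # Memory bandwidth (GB/s) = bus width in bits / 8 * effective data rate (Gbps)
    def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
        return bus_bits / 8 * data_rate_gbps

    # 4060 Ti: 128-bit bus at 18 Gbps GDDR6
    print(bandwidth_gb_s(128, 18))  # -> 288.0 GB/s

The narrow 128-bit bus is exactly what strangles the card, no matter how many TFLOPS the shaders offer.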
  22. People are waking up. Some protesters in London at the venue where Altman and Hassabis, I believe, were speaking:

Artificial intelligence could lead to extinction, experts warn

Meanwhile, the industry leaders hard at work trying to create the very existential risk, i.e. AGI, have signed a declaration of "concern" over the threat it would pose:

OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter
Statement on AI Risk

The gall of those people, and ultimately the unbelievable human stupidity in failing to stop them right away. Meanwhile, Mad Jensen has unveiled a new exaflop supercomputer, which could well be the machine used to actually create an AGI. He said it will be used for "exploratory research" (translation: AGI research) by the usual gang of bad guys: Google, Meta, Microsoft/OpenAI obviously, and Nvidia themselves.
  23. Bro, you could have grabbed yourself a spanking new 4060 Ti 8GB with some change to spare. What the heck were you thinking?
  24. Wow, so high... and just on all the dopamine and endorphin rush he gets while s..., err, taking advantage of people, gamers in particular.