NotebookTalk

Everything posted by Etern4l

  1. ASRock tends to rank at the bottom of the pile. If you are particularly concerned about reliability then MSI might be the Most Sensible Investment (though Asus might possibly be better for OCing).
  2. Thanks, great info. Do FEs even support VBIOS flashing, risky as it is without dual VBIOS?
  3. Out of interest: 1. Do we have a reasonable estimate of what the realised GPU brick rate is for this procedure? 2. Is there any way to take a snapshot of the original VBIOS, so the GPU can be restored to its original state? (A minimal backup sketch follows after this list.)
  4. That’s it. I recall getting to the end of it (again, alone lol) without any super-serious regrets, although it was a trip. I will refrain from commenting on the ending, other than: JJ Abrams.
  5. Enjoy those early seasons.
  6. GLHF. Mine managed to get maybe 3/4 through before dropping out, which was quite impressive lol
  7. Didn't realise a different cable was involved. Reasonable to assume you would need that too then. Maybe you can find out what the original panel was and check the connector type in the archive section.
  8. Just to mention the obvious, you don’t need any deals to start playing around with Linux safely. Just a bit of free disk space and a virtual machine hypervisor: VirtualBox, or VMware on Windoze I guess. (See the VirtualBox sketch after this list.)
  9. Depending on how technical you are, Gentoo is probably one of the worst distros to start with. You used to have to compile everything from scratch - it’s kind of educational, but I would recommend you start with a distro where people have already done the hardest bits for you and tailored things to your use cases. There is a lot to learn regardless. Negotiate the SSD deal down to some Ubuntu or Arch derivative lol
  10. OK, these measures (4090D and RTX 5880) do absolutely nothing to curtail AI research in the region of reference, although I guess Nvidia is happy to sell cut-down units at elevated prices. However, these are still lower-end devices. If the H100 were completely banned, that would actually slow things down, but it would also drive higher demand for the 5880 AD102 units, which kind of provide a way to sidestep the ban for many applications.
  11. Which professional cards use cut-down AD102? The below would suggest: none. https://www.anandtech.com/show/18781/nvidia-unveils-rtx-ada-lovelace-gpus-for-laptops-desktop-rtx-4000-sff There is no Ada version of A5500 or A5000?
  12. I have an m15 R1 144Hz so that’s encouraging, given that it was supposed to be the same mobo.
  13. Well, kind of. Datacentre AI cards use H100, a different chip, the full fat AD102s are going to the lighter AI/workstation cards (most likely the reason why there has been no 4090Ti/S), the cut down AD102s go to gamers/pros on a budget in the form of 4090, and 4090D gets (in theory) further scraps from that. I guess pricing/availability depends on the AD102 yield in each of those buckets and on the demand for 4090Ds. If there is strong demand/price support for 4090Ds in a region of implied reference, what's stopping NVidia from just disabling cores on the 4090 AD102 and selling them off there as 4090D? BTW I'm sure Jensen&co would be super-duper extra careful to ensure that those disabled Cores cannot be re-enabled by flashing a different VBIOS for example... Actually, are those "significantly limited" 4090Ds available in the West? Could be an interesting offering for those who care more about the VRAM than the 10% performance loss (in case there really is no workaround...). Couldn't find any.
  14. That’s weird. There’s stock here, sub-2k to boot. It’s possible they are playing the scarcity game again in preparation for a 4090S drop, or are just diverting units overseas in the form of “compliant” 4090Ds. We’ll need to wait a month or so for the dust from the 4080S drop to settle. !remindme 1 month E: where are we with 4090 pricing? 😉
  15. You got lucky that way, yet they are still indirectly devaluing the 4090 via the 4080S. Many gamers today would be hard-pressed to pay almost a 100% premium for the extra VRAM and +17% performance on average (TechPowerUp), and those who can leverage a multi-GPU setup might well prefer 2x 4080S with 32GB of VRAM to a single 4090 (see the quick perf-per-dollar sketch after this list). We can expect the primary and secondary markets to respond to that (unless the 4080S gets gimped/messed up on purpose). Anyway, one could have expected a 4090 refresh by now, or at least a price cut, but I guess they have been too busy with the all-important 4090D and AI units, and/or just waiting to see if they can milk enthusiasts and pros a bit more (in the absence of any competitive pressure, to be fair).
  16. What’s the result of your discovery? It’s a 285W card and you mentioned you were running it at 280W?
  17. Hard to disagree. For starters: why are all those cards voltage and power limited? If someone has a good WC rig, they should be able to accept the risk of a voided warranty and push the performance. In fact, manufacturers would ideally test OC and provide guidelines on this to help people enjoy their OCed rigs. Why are they not doing that? A few reasons:
      1. Bad for business - look at @tps3443, who was able to run his 3090 way past the specs for years. Same reason NGreedia killed SLI: to prevent people from being able to scale up their systems on the cheap.
      2. Insufficient competition - enough has been said here on this, other than that the crowding into the NVidia camp is just making it worse. I imagine NVidia holds some patents around SLI, which would explain why neither AMD nor Intel have opted to attack from that angle.
      3. A possibly underhand concern for energy efficiency - I don’t think Jensen&co are worried about the environment; all they care about is that more demand for electricity drives up energy prices, which in turn hurts the AI market and their main business now.
      4. And yes, the general AI craze has led manufacturers to scramble to focus on that sector; everything else has to take the back seat.
      All of those concerns would be greatly diminished if people were able to take coordinated action to address them, instead of acting myopically. (See the NVML power-limit sketch after this list.)
  18. Err, you opened yourself a thread for that? It would be amazing to read your thoughts on the pros and creative applications of AI there. I know the world is going to hell, so many will opt to just have some mindless AI fun, but let’s keep this thread factual and focused on the topic.
  19. Some of the world's most trustworthy people out on AGI PR damage-control duty: Zuckerberg and Altman both tamp down fear and hype with casual statements about AGI. --- Meanwhile, the AI march towards military applications continues: OpenAI opens the door for military uses but maintains AI weapons ban --- Last but not least, Googlers are hard at work replacing Googlers, a sign of the lovely times I guess: Google lays off “hundreds” more as ad division switches to AI-powered sales
  20. Looks like someone finally spotted the glaringly obvious opportunity: Could Apostrophy OS Be the Future of Cellphone Privacy?
  21. Dicey, depending on the software/title. Roughly 1/3 of the 13900K performance. I keep hearing current CPUs aren't enough to feed the 4090 (desktop).
  22. I can assure you that my existence/experience is not in the slightest compromised by not gaming on a flagship NVidia GPU; in fact it’s greatly enriched by that, including through the resulting gigantic time savings. It’s pretty arbitrary how people define the quality of their experience. A mobile game addict will say all they need is a flagship phone and a few hundred per month spent on microtransactions. Another guy will claim nothing but driving a Bugatti will do 🙂 Occasionally these days, someone will rate their existence in terms of the quality of their creations, what books they read etc., or in terms of the degree to which they have been doing the right thing for their community, their country, the world.
  23. Hmm, that was released over 3 years ago, but I can totally see your point. I can’t believe how much I paid for a 9800 GX2 15 years ago. Those tech purchases can haunt you forever lol /s
  24. “That much” is actually appropriate. The jump from Ampere to Ada was much larger than that from Turing to Ampere, which was kind of meh on the architectural level - they worked around this by scaling the GPU and VRAM, resulting in a “large” 3090 unit. Now, thanks to the seriously improved Ada architecture, they were able to offer a decent performance uptick while actually slashing VRAM specs below the 4090. With the 4090 they released a chungus-class device with fast VRAM, hence a performance improvement unprecedented in recent history. To maintain anywhere near that momentum they would need another spectacular architectural breakthrough, probably still coupled with faster VRAM and a bigger chassis. A 6-slot GPU for $3K, anyone? Apart from bro @electrosoft who already preordered 🙂
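
Re post 3: a minimal sketch of taking that VBIOS snapshot before flashing anything, assuming NVIDIA's nvflash utility is installed and on PATH. Flag names can vary between nvflash releases, so treat this as illustrative and check `nvflash --help` on your version first.

```python
# Sketch: back up the current VBIOS before any flash attempt.
# Assumes NVIDIA's nvflash utility is installed and on PATH; flag names
# may differ between nvflash releases, so verify with `nvflash --help`.
import subprocess
from datetime import datetime

def backup_vbios(outfile: str | None = None) -> str:
    """Dump the currently installed VBIOS to a .rom file and return its path."""
    outfile = outfile or f"vbios_backup_{datetime.now():%Y%m%d_%H%M%S}.rom"
    # --save writes the firmware image of the primary adapter to disk.
    subprocess.run(["nvflash", "--save", outfile], check=True)
    return outfile

if __name__ == "__main__":
    print("Saved original VBIOS to", backup_vbios())
```

Keep that .rom somewhere safe; it is the only way back to stock if the new image misbehaves (and it does nothing for a flash that bricks the card mid-write).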
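Re post 8: a quick sketch of spinning up a throwaway Linux VM from the command line with VirtualBox's VBoxManage tool. The VM name, memory/disk sizes, and the ubuntu.iso filename are placeholders I made up for the example, not anything from the post.

```python
# Sketch: create a disposable Linux VM via VirtualBox's VBoxManage CLI.
# Assumes VBoxManage is on PATH and an installer image named ubuntu.iso
# sits in the working directory (both are placeholder assumptions).
import subprocess

def vbox(*args: str) -> None:
    subprocess.run(["VBoxManage", *args], check=True)

VM = "linux-sandbox"
vbox("createvm", "--name", VM, "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", VM, "--memory", "4096", "--cpus", "2")
vbox("createmedium", "disk", "--filename", f"{VM}.vdi", "--size", "25000")  # ~25 GB
vbox("storagectl", VM, "--name", "SATA", "--add", "sata")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "0", "--device", "0",
     "--type", "hdd", "--medium", f"{VM}.vdi")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "1", "--device", "0",
     "--type", "dvddrive", "--medium", "ubuntu.iso")
vbox("startvm", VM)
```

Delete the VM afterwards with `VBoxManage unregistervm linux-sandbox --delete` and the host system is untouched.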
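Re post 15: the back-of-envelope value comparison as a tiny script. The street prices are illustrative assumptions of mine; only the +17% average gap comes from the TechPowerUp figure cited in the post.

```python
# Back-of-envelope perf-per-dollar check behind the 4080S vs 4090 comment.
# Prices are illustrative street figures (assumptions, not from the post);
# the +17% average gap is the TechPowerUp number cited there.
prices_usd = {"RTX 4080 Super": 1000, "RTX 4090": 1950}       # assumed
relative_perf = {"RTX 4080 Super": 1.00, "RTX 4090": 1.17}    # 4080S = baseline

for card, price in prices_usd.items():
    value = relative_perf[card] / price * 1000
    print(f"{card}: {value:.2f} perf points per $1000")
```

With those assumed prices the 4090 delivers roughly 60% of the 4080S's performance per dollar, which is the point about the premium.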
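Re post 17: a short sketch of inspecting (and, within whatever ceiling the VBIOS exposes, raising) the board power limit through NVML, assuming the nvidia-ml-py package is installed. Raising the limit requires admin/root rights and cannot exceed the vendor cap, which is exactly the limitation the post complains about.

```python
# Sketch: inspect (and optionally raise) the board power limit with NVML.
# Assumes the nvidia-ml-py package (import name pynvml) is installed and an
# NVIDIA driver is present; raising the limit needs elevated privileges and
# is capped at whatever maximum the VBIOS allows.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(gpu)
current_mw = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu)
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)

print(f"default  : {default_mw / 1000:.0f} W")
print(f"enforced : {current_mw / 1000:.0f} W")
print(f"allowed  : {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W")

# Uncomment to push the limit to the VBIOS maximum (requires elevation):
# pynvml.nvmlDeviceSetPowerManagementLimit(gpu, max_mw)

pynvml.nvmlShutdown()
```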