NotebookTalk

Everything posted by electrosoft

  1. Refer back to my post about SSD streaming. That is the issue, and I am not sure even one of those drives would alleviate it entirely. It may help in Windows where it is right on the edge, but for larger transfer/load scenarios? No.
  2. Very common in newer games with insane asset sizes, less common in older games on newer hardware, but it was always an issue. It is best experienced when a game has a speedy travel mechanism that asset streaming can't keep up with, and you get a nice "chunk" as assets pop in with the seams showing. Not much they can do; even the world's best SSDs can't help....yet. The dream goal is to seamlessly stream assets from SSD->VRAM. We may in the future get to a point where we can sacrifice fidelity for fluidity, if at all.

     @tps3443 in this scenario, the way games are designed, even 24GB of VRAM won't fix the issue, as assets come in fixed sizes and scale down based on the VRAM available. If you turn down resolution/detail level, you can sometimes avoid it, as the assets can be handed off or streamed more easily.

     WoW has long suffered from this since its engine redesign (more of an overhaul) 5-6 years ago. Recently it has really come to light with dragonriding, which allows burst travel almost 3x faster than the fastest normal flying. Depending on terrain and position, you can trigger this "chunk" and watch your fps drop by half for a split second at repeatable locations in the game. If you drop your resolution from 4K to 1080p and/or details to med-low, it magically gets better or flat out goes away.

     Back in the day, we called this "look-ahead streaming," "dynamic draw fill," and other nifty terms to describe the process. It is also why many games offer "view distance" options: not only to take load off your hardware, but to minimize asset loading and make it more manageable. (See the toy sketch below for the basic idea.)

     @D2ultima "Traversal Stutter" I like that! 🙂
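
     A minimal, purely illustrative sketch of the idea (not any engine's real code, and all numbers are made up): assets start streaming when they enter a prefetch radius, and if travel speed outruns the per-frame streaming budget before an asset is actually needed on screen, you get a blocking load, i.e. the "chunk":

     ```python
     # Toy model of look-ahead asset streaming vs. travel speed.
     from dataclasses import dataclass

     PREFETCH = 500.0   # start streaming an asset within this distance ("view distance")
     NEEDED   = 100.0   # asset must be fully resident by here, or we block ("chunk")

     @dataclass
     class Asset:
         position: float    # 1-D world position, for simplicity
         size_mb: float

     def simulate(speed: float, assets: list[Asset], budget_mb_per_frame: float) -> int:
         """Return how many frames hitched over 4 simulated seconds at 60 fps."""
         remaining = {id(a): a.size_mb for a in assets}
         for a in assets:                       # pre-warm what is already in range
             if a.position <= PREFETCH:
                 remaining[id(a)] = 0.0
         pos, hitches = 0.0, 0
         for _ in range(240):
             pos += speed
             budget = budget_mb_per_frame
             for a in assets:                   # ascending position = nearest-ahead first
                 dist = a.position - pos
                 if 0 <= dist <= PREFETCH and remaining[id(a)] > 0 and budget > 0:
                     used = min(budget, remaining[id(a)])
                     remaining[id(a)] -= used   # streamed asynchronously, no stutter
                     budget -= used
                 if 0 <= dist <= NEEDED and remaining[id(a)] > 0:
                     remaining[id(a)] = 0.0     # forced blocking load -> visible hitch
                     hitches += 1
         return hitches

     world = [Asset(position=i * 100.0, size_mb=120.0) for i in range(200)]
     budget = 1500 / 60                         # assumed effective MB/s, per 60 fps frame
     for speed in (10.0, 30.0):                 # normal flight vs. ~3x burst travel
         print(f"speed {speed}: {simulate(speed, world, budget)} hitched frames")
     ```

     At the slow speed the streamer keeps up and there are zero hitches; at ~3x the demand outruns the budget and the hitches start, which is also why lowering detail (smaller assets) or view distance (smaller prefetch set) makes the problem shrink or vanish.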
  3. Hey @Etern4l! Did your 13900K run fine at auto before, but now won't pass those same tests? Did you change BIOSes? Does it fail on 0x129 using the Intel Enforced Limits? Usually, a degraded CPU requires downclocking and a positive offset. The fact that it responds well to an undervolt is encouraging; it may be OK, or the degradation is minimal. Delidding a CPU and not using LM really defeats the purpose of replacing the stock sTIM, especially with the stock lid, as there is usually no absorption/hardening (a 3rd party copper top can be a different story).
  4. Wukong on a CL. 5.9/4.5 with Vcore up to 1.43 under load seems a bit high? I'll have to go back in and re-check my profiles and see, because 1.43 under load... on auto? Fixed?
  5. Funny, I came away with the same conclusion. 🙂 "Oh, here's something for @tps3443 to aspire to" and "Yeah, I'm good with the benchmark." Don't do it! You know you'll regret it! But if you DO do it, make sure to post all the results. 🤣

     True on the 4090. I am sure you could hit that on mine no problem, seeing as it does 3120+ on the stock Vbios and stock AIO cooling. I have no idea where it would go blocked, let alone chilled, lol.

     I know you have to be excited about this, bro! Once we downsized, we sold our monster-sized house and moved into a double wide, and costs dropped by insane levels across the board. 15 years later, my lady now keeps sending me listings for "smaller houses...not as giant sized." She wants the privacy of a house and land around it again. Looking forward to the finished place. You DID make sure it has a dedicated computer/office room for you?
  6. Here you go @tps3443. System gaming stats to shoot for!
     4090 - 3150MHz
     14900KS - 64x / 49x (SP111 P126 / E81)
     DDR5-8800 CL38
  7. This is what? 10 years now since the NBR forums? I realized some years ago that you buy Intel/Nvidia hardware day one (or close to it) and give it a really competent round of early testing that tends to stick and align with my approach to hardware, so I always trust it, and it has yet to be wrong.... .....why fix what isn't broke? 🤣 So yeah....I'll be on the lookout for your results. 😜
  8. Ditto..... If I weren't into gaming and were new to computers, the 9950X makes sense; I would pick it over 14th gen. But right now, the smart move is to wait for 15th gen and the X3D variants.
  9. I'll wait for day 1 @Talon results to see where I'm going.... 🤣 BTW, saw the thread for the SP103 over yonder....yikes, that's going to be a poo poo storm and then some. JayZ on how to check for degraded chips.....
  10. Yeah, I tested it on my 14900KS setup and it compiled just fine with my original profiles. The Wukong benchmark just hit differently, as noted by many here and elsewhere. I also think there is a wide crossover between software issues and hardware issues, and the easy out is to blame Intel atm, as we see below: the fact that it reaches 100% shows it most likely isn't a shader compile issue; the problem lies elsewhere. Sometimes people confuse buggy software with hardware issues. That is true for both Intel and AMD platforms.
  11. I have hopes for Arrow Lake. AMD 9000 really isn't exciting; slim hope for the 9950X3D/9800X3D to provide some type of meaningful uplift. AMD GPUs are apparently regressing overall next gen but are supposed to have stronger RT? Help me, 5090...you're our only hope. 🤣 Right now, I'm financing my daughter's college education and she's in year 2, so any excuse to pump the brakes is OK. The bill from the Bursar's office just hit and it was spicy! 🤑
  12. Pity, the 5600MHz V/F is actually decent. If it won't run anything even on your killer setup (DD), and is only stable at those settings and that load voltage, that chip screams defective. I don't think I've ever read of a chip degraded to that level. Either way, it needs a professional relid and an exchange from Intel.
  13. There have been plenty of out-of-the-box new 14th gen CPUs failing at auto settings because they were garbage, but this one seems like uber trash. It could be a dud, abused/degraded, or a bit of both. That is a stupidly high load voltage for 48c at 5.6. How does it work on pure auto? Try incrementing a positive offset to see if you can find a level of stability, since it is DD/water-chilled, which takes heat out of the equation.
  14. Hmmmm, I think I remember someone basically saying the same thing. Lemme see if I can find it...ah yes....here it is.... 🤣
  15. Considering you said your SP99 vs SP108 was fairly close with chiller love (temps rule everything), I am sure you will get amazing results from it too. I'm always curious about the IMC.
  16. Apparently the new Final Fantasy XVI demo on Steam, released on the 19th, is also giving some users fits with its shader compiles. I just ran it on my main desktop (7950X3D/4090) with no problems at all. I'll try it on my 14900KS/7900XTX setup later to see how it stacks up against the Wukong fun times. https://store.steampowered.com/app/2738000/FINAL_FANTASY_XVI_DEMO/
  17. Yep, I called the original 4070 the sweet spot, and the 4070 Super is the ultra sweet spot. It is a no-brainer for someone with a ~$600 budget for a GPU (or what we called flagship pricing years ago....). I keep coming back to it for my SFF build on my charts with the 14900KS......

      Depends on the title and consumption requirements. Look at Starfield.... you can see that as it gets hungrier they start to separate, but for mid to lower tier power requirements the bigger chips all kinda just end up in the same area. A 7800X3D would wipe the floor with the 14900K for consumption and beat it overall in gaming (it gets trashed everywhere else). Those results really don't show Intel in the best light (look at the memory) and it is still winning. I am expecting the Ultra 200 series to hopefully bring some heat. The laptop variants are......ok. The top of the line Ultra 9 185 is basically a slightly better 13600K.

      The 9800X3D will just step right back in and take the top spot......over the 7800X3D, but not by as massive a margin as YT keeps saying because.....

      ......of this. A properly tuned 13th/14th gen can hold its own with the X3D chips, but it takes a level of refinement that normal users aren't going to do, nor should they have to. Intel is the tinkerer's chip, as it yields the most from getting under the hood. AMD has a lot of "set and forget" properties, which have their merits. Most users are sitting on air coolers or rando AIOs, using the factory thermal paste on the AIO, booting up, and going from there with factory defaults. I respect YT channels like Testing Games that present a more realistic "end user" experience, while acknowledging we will never run anything even close to that. In an out-of-the-box comparison, Intel suffers the most because it has much more tune-ability. Look at the memory settings: the 9950X is running at its 6000 sweet spot while Intel is chugging along at 6000 too. If you want a more realistic AMD vs Intel vs Nvidia channel, I watch Bang4BuckPC Gamer. He tunes his hardware on a CL and routinely does comparisons with all his hardware (X3D, 14900K, 7900XTX, 4090, etc...).

      Intel can push the new MC, but it still comes down to MB makers responsibly enforcing it while also allowing advanced users to disable it if they see fit. I am sure we will see a few more BIOS updates fleshing it out. As for the SA bug, when all is said and done, I am still setting my SA back to 1.18 for 8200, because it works and there is no need to feed more voltage than required. Of course, for benching and testing, I let it run free. 🙂
  18. With the way I tier my profiles based on games' power requirements to boost my clocks and scale, it quickly shows which games are sucking down the most power and require a higher tier/lower clock setting on my modest AIO setup. This was the first game where all my previous auto/lower power tiers crashed, and that was before it even got to any type of shader loading. The shaders just kicked it up a notch. I had no problems running it on my 4090/7900XTX and 7950X3D/14900KS (post miser power acknowledgement lol).

      Oh, and happy birthday @Papusan! Hope it was a good one and here's to many more, brother!

      True, but unfortunately Joe User isn't going to know what to do or how to handle it, and will either refund the game or blame hardware that might be unstable on many fronts. Me personally? I like tinkering and figuring out wth is going on. 🙂

      297W on your setup is serious for a game, yikes. This might be a good wake-up call for Intel, and to an extent AMD, going forward: maybe don't push the chips so hard trying to one-up each other, and get back to static, realistic settings.
  19. I do not have that one. Is it free, or is there a demo? With adjustments (using my Y-cruncher/OCCT profile), Wukong works perfectly now. I just didn't know I was going to have to roll out the bigger-guns profile to do it, but it's understandable. Even though it pulled only 226W max at 5.6 compiling shaders and running the demo, it still needs 1.31V set (down from 1.32V) and 1.184V under load to run. Hogwarts and Fortnite UE5 worked just fine, but Wukong must have some special sauce mixed in. 🤣
  20. Ugh, MSI is supposed to be right up there with Asus in regard to BIOS controls. One thing I can say about Asus is they really do give you the kitchen sink and then some. EVGA was up there too. I always considered MSI next, but watching BZ flip around in Gigabyte's BIOS, it seems pretty competent these days too. Asrock is missing a lot of fine/granular controls, and I find their FIVR controls a bit lacking. So far, it is Asus and Gigabyte with the ultimate cut-off valve for vrout max settings.

      When it comes to games, I have always run a mix of configs, allowing me to scale to the game's demands and reap the rewards in clock settings. It lets me really wring out every last bit of performance per game from my chips without trying to implode them. 🙂 What I usually do is, when I find a game or stressor I want/need to run, I adjust around it and create another profile tier. For example, for WoW, Starfield, and FO76 I can get away with tuned/dialed-in auto and run up to 6GHz all-core if I want, but 5.9 is the perfect sweet spot. I could probably even run FO76 at 6.1GHz all-core. CP2077 I can do at 5.8 all-core. This Wukong benchmark? I had to go back and use my "big boy" profile at 5.6, which means I'll have to adjust upward for 5.7; 5.8 most likely won't run, but it might. 5.9 is definitely a pipe dream on my lidded AIO setup. (See the toy sketch below for the general tiering idea.)

      With that being said, I absolutely agree that real-world stability is best, as it assures everything will run, but the cost is performance left on the table when a game can run more with less.

      Did you ever find out what made your monitoring text change color, and what it meant when running this benchmark? I'm curious.
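
      A minimal sketch of that tiering idea (all tier names, wattage cut-offs, and per-game draws here are hypothetical placeholders, not my actual profiles): record a game's observed peak package power, then map it to the least conservative profile tier that can hold it:

      ```python
      # Hypothetical numbers for illustration only.
      PROFILES = {                    # tier -> (all-core ratio, set Vcore; None = auto)
          "auto-tuned": (59, None),   # light loads, e.g. WoW / FO76
          "mid":        (58, 1.29),   # e.g. CP2077
          "big-boy":    (56, 1.31),   # heavy hitters, e.g. Wukong / OCCT / Y-cruncher
      }

      def pick_tier(peak_package_watts: float) -> str:
          """Map a game's observed peak package power to a profile tier."""
          if peak_package_watts < 150:
              return "auto-tuned"
          if peak_package_watts < 220:
              return "mid"
          return "big-boy"

      # Assumed peak draws, just to show the mapping:
      for game, watts in {"FO76": 120, "CP2077": 190, "Wukong": 226}.items():
          ratio, vcore = PROFILES[pick_tier(watts)]
          print(f"{game}: {pick_tier(watts)} -> {ratio}x, "
                f"Vcore {'auto' if vcore is None else vcore}")
      ```

      The point of the design: hungrier games pay for stability with lower clocks and more voltage, while light games get to keep the higher all-core ratios, instead of one worst-case profile dragging everything down.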
  21. Yeah, the power draw is one thing, but having to actually use my Y-cruncher/OCCT profile to get a game benchmark to run is crazy. Using my other settings, it would just exit before it even got to the shader compile screen. I'll have to go back in and figure out whether 5.7 and 5.8 can get it to run too, unless the pull/heat gets a little too much for my modest setup. Glad to know I wasn't seeing things, as I tried three times to get my adjusted 66 to show up on the final results screen. Seems like over on the OCN forums they have all decided on maxing everything at 4K with a Super Resolution setting of 75 and seeing how many can break 80fps and by how much. I saw a couple of 83fps scores. LOL.... I chuckled nicely at that one. 🙂
  22. It's 66. I don't know why it kept showing 65; even when I went back in and manually set it to 66, it would still show 65. I actually did the run *3* times trying to get it to show the 66 I selected! 🙂 Thinking about blocking it? Max everything out and set Super Resolution to 66 to see how it stacks up against @tps3443's and my scores at the same settings.
  23. And here are some AMD 7900XTX runs on the 14900KS. I had to actually bump my Vcore up to 1.32 for 5.6/4.5/5.0 (1.2 under load) to get it to run properly, not just for shader compiles but normal use too. This thing is hungry. I think a lot of systems are going to crash or won't run this at launch. This is the same profile I have to run for even small OCCT and Y-cruncher testing. I can pass CB23/CB15 at 1.29 on the new BIOS and 1.28 (1.152V load) on the old BIOS, but this thing required the same profile as the ultra hungry stuff. Temps were good though! 🙂 This is why I like having multiple profiles: I can load up 5.9 all-core auto-tuned for WoW and FO76, and 5.6 as above for any heavy hitting.

      As for AMD, er, well.... let's just say RT really isn't in the future for them with this game in any meaningful way. Using the same settings as my previous run on the 4090 (and the same as @tps3443):

      7900XTX stock + 14900KS fixed 56x/45x/50x, RT ON (note: this was the first run and includes shader compile data in the HWInfo capture)

      7900XTX stock + 14900KS fixed 56x/45x/50x, RT OFF:
  24. Pure 100% stock CPU and stock GPU, memory at 6000 only but tuned, IF at the stock 2000. So a 6fps difference? That's it? Hmmm.... And yeah, the shader compiles are brutal. I'm fishing through various setups on my 14900KS on the AIO to find what works best....the 7950X3D just shrugged it off and said, "and what?" 🤣