NotebookTalk

electrosoft


Everything posted by electrosoft

  1. Like Intel, they had to respond with something instead of just admitting defeat for the moment and riding it out until their next-gen product is announced.
  2. You are thermal throttling badly, which is reducing your clocks. Establish your best UV first and see if you can get rid of that throttling. If not, you might need to get in there and take a look at your thermal paste application and mounting pressure to make sure it is up to snuff. For thermal testing, make sure to set fans to max to rule out any ramp-up issues, as the X170 ramps up slowly and won't get to max or proper RPM before you get hit with thermal throttling.
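That throttle check can be sketched in a few lines of Python. This is a hypothetical example: the (temp, clock) sample format, the 100°C throttle point, and the target clock are all assumptions to adapt to your own HWiNFO/log export.

```python
# Hypothetical throttle detector: flag log samples where the CPU is at its
# thermal limit AND effective clocks have fallen below the dialed-in target.
# TJ_MAX_C and TARGET_MHZ are assumptions -- set them for your own chip.
TJ_MAX_C = 100
TARGET_MHZ = 4900

def throttle_events(samples, tj_max=TJ_MAX_C, target=TARGET_MHZ):
    """Return indices of (temp_c, clock_mhz) samples that look throttled."""
    return [i for i, (temp_c, clock_mhz) in enumerate(samples)
            if temp_c >= tj_max and clock_mhz < target]

# Example log: one (package temp C, effective clock MHz) pair per second
log = [(72, 4900), (88, 4900), (100, 4400), (101, 4300), (95, 4900)]
print(throttle_events(log))  # -> [2, 3]
```

If this returns hits during a fans-at-max run, the UV/repaste advice above is the next step.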
  3. It is kinda all over the map, and you get the average, which has it close, but you have to go into the individual games and see what's going on. You get a game like CS:GO where performance regresses with the 5800X3D, but then you turn around and get a massive outlier like this: Which really skews the overall average. This is AMD's turn to create a Frankenchip. It's their 11900k, and it is going to take a very specific set of games to justify picking one up. If all you play is FC5 and BL3, it's apparently a killer chip, but I'm going to want to wait and see more detailed benchmarks, including 1% and .1% lows when it isn't sitting on that fuel injector of a cache, because those low stumbles are deal breakers.
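Since so much hinges on 1% and .1% lows, here is a minimal sketch of how those are commonly computed from a frame-time capture (e.g. a CapFrameX/PresentMon CSV). Exact definitions vary by tool; this one uses "average FPS of the slowest N% of frames":

```python
# 1% / 0.1% low sketch: average the slowest N% of frame times, report as FPS.
def percent_low_fps(frametimes_ms, percent):
    worst = sorted(frametimes_ms, reverse=True)    # slowest frames first
    n = max(1, int(len(worst) * percent / 100))    # at least one frame
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms                         # ms per frame -> FPS

# 990 smooth frames at ~120 FPS plus ten 30 FPS hitches
frames = [8.3] * 990 + [33.3] * 10
print(round(percent_low_fps(frames, 1), 1))    # -> 30.0 (the hitches dominate)
```

The average FPS of that capture would still look close to 120, which is exactly why the lows matter more than the mean for judging stutter.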
  4. 5800X3D looks to be on average equal to the 12900k/ks for gaming (and obviously nothing else). It depends on the game(s) you play, as there are some meaningful gains for each processor depending on the game played. If your game already favored AMD, it looks like the 5800X3D just kicks it up to the next level. If all I played was FFXIV (which seems to absolutely thrive on AMD hardware) at 1080 or 1440 and everything else I did wasn't multi-threaded dependent, I could see building out a 5800X3D rig. All my daughter's fiance plays is FF, and he is starting to ask about me building a moderate desktop for him to replace his aging laptop (4810 + 970m). He plays at 1080p and wants to go high refresh. A nice inexpensive MB, 5800X3D, and AMD 6600 would be a sweet little build that would be a substantial upgrade. In other words, the 5800X3D is going to be catered towards a very specific demographic: a particular set of games that favor AMD, 8 cores / 16 threads being more than enough for their normal usage, an inexpensive MB since it can't OC, nothing time critical, and a smaller budget overall. Ugh, I think we both know my past PTSD with the USB issues.....
  5. True, seeing as the 5800X3D is aimed squarely at gamers, that is where it will potentially shine, but it will have to shine quite brightly to offset the overall performance loss in almost all other performance categories vs 12th gen.
  6. Well said Bro and I agree. 5800X3D early review numbers don't look too bad (actually quite good as presented) for 720p and 1080p gaming benchmarking, but this is one site, so we will see in a few days when the embargo lifts and other sites churn out their numbers. It definitely seems to add a bit of punch. A portent of things to come with the 6000 series and V-cache: https://xanxogaming.com/reviews/amd-ryzen-7-5800x3d-review-the-last-gaming-gift-for-am4/
  7. They do actually, and they will need to bring overall value along with rasterization gains to the 7000 series GPUs. We talk about AMD driver maturation, but Nvidia clearly made strides too with their drivers, and the gains from his testing 10 months ago to now, coupled with 12th gen, are substantial. When he tested RKL, 10th gen was still champ, but the 5800x was the overall winner; that was before the AGESA neutering began, and coupled with SAM it pushed AMD ahead of Intel. More importantly, the 3000 series "CPU overhead" issue @ 1080p was real, but either it has been resolved, 12th gen is simply outmuscling the issue, or a combo of both has produced a massive performance delta that even surprised him. Before, we would see AMD win at 1080p, trade blows at 1440p, and lose at 4k. Add a 6900xt to the mix at lower resolutions plus SAM and it was even worse for Intel and Nvidia. Fast forward 10 months and 12th gen plus the 3000 series is the clear winner across all resolutions by huge margins. If I had pulled the trigger on a 6900xt to play with a week or two ago based on previous assumptions and data like I was contemplating, and then saw these new updated results (or just experienced them myself first hand), I would have been kicking myself in the keister while absorbing the sale loss on eBay or elsewhere, as 6900xt's continue to tank in the used market and aren't really even moving new at $1199 and lower. I usually leave E-cores on, but I'm going to disable them and re-dial in my 12900k, then start the long process of manually dialing in those 2x16GB DDR4 b-die sticks from @Mr. Fox
  8. Hardware Numb3rs is finally back and did an in-depth test of the 12900k vs 5000 vs 11th vs 10th for WoW. Conclusions?
     - The 12900k is a God chip for WoW and smashes every other chip badly, including the 10900k
     - The 10900k is better overall than 5000 and 11th for WoW, but nothing that is a game changer. The only upgrade path is to the 12900k.
     - E-cores hurt performance greatly even in W11 for WoW, as the scheduler (hot garbage) can't prevent the main thread from drifting to E-cores
     - Memory tuning results in massive gains using DDR5 (he is going to test DDR5 vs DDR4 soon)
     - Coupled with the 12900k, the 3090 beats the 6900xt at every resolution massively, even with SAM enabled
     I'll be passing on the 6900xt now as the 3000 series is the clear winner, and I can't see a scenario where the 5800X3D can catch up with that massive gap in performance with its extra cache, but you never know. 12900k vs all others: E-Cores off vs on: 12900k + 6900xt: 12900k + 3090:
  9. I need a couple of Windows 10 Pro licenses. Where is the most inexpensive place to pick up a few?
  10. Much better IMC and 5.3/4.0 @ 253w? I'd call that a wrap and dial it in, personally. I agree about the 2-DIMM board. 4 DIMMs is just more trouble than it's worth with ADL. MSI boards have the best luck with 4-DIMM DDR4, and even then 2 DIMMs give you more headroom. With 4-DIMM DDR5 boards it's a no-brainer: everyone is running 2 DIMMs anyway. Wasted traces and potential issues.
  11. Yeah, that's way better than mine @ 5.3/4.0. I'm at 295w CB23 on 1304 D4 sitting in the high 90s on an AC LF II 420mm. 252w for 5.3 / 4.0 seems pretty decent IMHO.
  12. That second one is trash, but looking at some of the 12900ks chips over on the other forums and the statistics from BZ, there is a serious amount of crap in there. I don't think I have it in me to bin out five 12900k's again, this time with 12900ks chips. I ordered one launch day from Best Buy in their initial batch, and it's sitting at BB waiting for pickup. I'm still going back and forth....
  13. Agreed. Toss in great SP rated chips that have a stubborn core that won't OC no matter what, and all the great voltage and thermals mean nothing. The 9900ks I bought had this exact problem. Fantastic thermals and pull, but it had a stubborn core that borked out at 5.3 no matter what you did or how cool it was running. Worked out great for me, as it was a killer chip for stock, cool running in my P870TM1-G. 11900k's had this hard stop problem too with a troublesome core. Absolutely great SP100+ samples that wouldn't go past 5.2 no matter what you did. Then some chips live up to their SP ratings (good and bad). I've been binning a few more 10900k's, and watching an SP63 just explode to 367w+ at 5.2, needing 1.58v to run CB23, was a sight to behold....I'm sure there's even worse out there. But...at stock? It handled a -.110 UV like a boss, ran a CB23 20 min loop no problem, and while it wasn't as good as an SP95+ sample, it wasn't that far out of range. Turdbooks (and BGA) are the worst in that you're stuck with the hand that is dealt to you unless you send the entire system back, or with someone like Dell manage to get a system board swap, if you get a poor silicon sample and your system runs super hot and heavy.
  14. I will say this. I've yet to see a low SP score produce a good chip without forcing a lot of volts and chiller (or greater) cooling down its throat. @johnksss You got the magic touch right now brother! SP97 KS = SP105-SP107 K, sheesh. Let's see the P cores and V/F curve on that puppy:
  15. I will give Asrock motherboards props on this. Not only can you turn RGB off in BIOS but they include a version of their RGB control software in BIOS for setting your lighting right then and there. I noticed it when I was building out the mATX B660M system with their Pro RS board. I updated the wife's system and I noticed MSI has overhauled their software suite and it is bigger and more intrusive than ever. Bad form MSI.
  16. That is a pretty righteous SP98 12900k. It's a wonderful feeling when you pull a chip off the shelf and it is a good bin. Do you have a shot of the V/F curve? 1.101 @ 5.1 under load @ 1.288 adaptive? See how it does with Falk's binning baseline: Set your chip to the following: 5.4 ghz sync all cores (P cores: 5.4 ghz), E cores 4.0 ghz, Ring: 4.0 ghz Actual VRM Core Voltage: 1.42v (Bios set). LLC: Level 6 (I'm assuming you have an Asus board). Run Cinebench R23 10 times loop. If it passes, run R15 10 times in a row as fast as possible.
  17. I usually target the non RGB variants. On the other hand, when I swapped the wife from 2x8GB white RGB'esque "pretty sticks" that were running @ 3200 18-22-22 to a pair of 2x8GB G.Skill jet black sticks running @ 4000 C14-14-14 all she had to say was, "They don't match and where's the lights?" Clearly there is a market for RGB color face blast.... My major gripe is still the software. Basic lighting control software that somehow needs to be bloated, intrusive and data collection enabled......right....
  18. Frame Chasers comparing a 12900ks (P cores only) versus his godbin 12900k SP102 (P113) and 12900k SP86 (P96). It does seem to show, yet again, the SP scores aren't equal KS vs K and you will need to weight the KS score by +8-10.
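That KS-vs-K weighting is trivial to encode; here is a hypothetical helper, with the +8 to +10 offset being this rule of thumb rather than anything official from Asus:

```python
# Rule-of-thumb from the post: a 12900KS SP score reads ~8-10 points lower
# than an equivalent 12900K score. The offsets are an estimate, not official.
KS_TO_K_OFFSET = (8, 10)

def ks_as_k_range(ks_sp):
    """Map a 12900KS SP score to its rough 12900K-equivalent range."""
    low, high = KS_TO_K_OFFSET
    return (ks_sp + low, ks_sp + high)

print(ks_as_k_range(97))  # -> (105, 107), the SP97 KS example above
```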
  19. Asus BIOS 1403 is KS compliant but there isn't a way to directly compare SP scores between K and KS so far. In the end, run some baseline tests and see what ya got.
  20. Did you try running the KS test parameters I posted from the OC forums from @Falkentyne to see where your chip stacks up? Like I said, I ran it on my 12900k SP91 for shiggles and it error'd right out of CB23. 😁 Looking at Buildzoid's statistical data it looks like you did get a slightly below average KS as the average from 183 samples was SP90 (P98/E76) but a hardline test is to give Falk's parameters a try.
  21. I saw @Falkentyne posted this over on the OC forums for testing 12900ks chips. See how your 12900k fares too: "Set your chip to the following: 5.4 ghz sync all cores (P cores: 5.4 ghz), E cores 4.0 ghz, Ring: 4.0 ghz Actual VRM Core Voltage: 1.42v (Bios set). LLC: Level 6 (I'm assuming you have an Asus board). Run Cinebench R23 10 times loop. If it passes, run R15 10 times in a row as fast as possible. If your chip passes, you have an above average chip that had a lower than expected SP rating that performs better than indicated, and then you can see if you can lower the vcore even more." I gave it a whirl on my SP91 12900k (P102 / E69) for shiggles and got an instant crash. 🙂 See how yours makes out.... LLC6 Asus = LLC3 MSI?
  22. The way EVGA handled my GTX 280 RMA process way back in the day is what pushed me over to making them my first choice, all things being equal. The way they handled my Z390 FTW motherboard failure (courtesy of the Corsair H1200 PSU recall debacle) just solidified it.
  23. In addition to @solidus1983's suggestions, you could also entertain the idea that you have lost the heatsink lottery and order either another stock model or explore something like this, which is beefier, a water-air hybrid, and supports 1000, 2000, and 3000 series cards across the board. https://www.aliexpress.com/item/1005003079962370.html?_randl_currency=USD&_randl_shipto=US&src=google&src=google&albch=shopping&acnt=631-313-3945&slnk=&plac=&mtctp=&albbt=Google_7_shopping&albagn=888888&isSmbActive=false&isSmbAutoCall=false&needSmbHouyi=false&albcp=15229744697&albag=126912416582&trgt=1480551745650&crea=en1005003079962370&netw=u&device=c&albpg=1480551745650&albpd=en1005003079962370&gclid=EAIaIQobChMIrpGG-PKC9wIVirWzCh2LNQbNEAQYASABEgJaZ_D_BwE&gclsrc=aw.ds&aff_fcid=791582db86e649d989e237e31b4b7be7-1649367032112-06408-UneMJZVf&aff_fsk=UneMJZVf&aff_platform=aaf&sk=UneMJZVf&aff_trace_key=791582db86e649d989e237e31b4b7be7-1649367032112-06408-UneMJZVf&terminal_id=f932f1f5e5424abf92c0e80c3d883e7d&afSmartRedirect=y
  24. @reallango here are some runs I did to give you an idea of the numbers you should be seeing (depending on CPU and GPU). I always run with fans on auto and let them get to where they need to be naturally versus forcing full fans. Timespy: Here is a stock 10900k run for CB23 with a ~-.080 UV. Corsair 3800 sticks dialed into 3200 CL14-14-14-36: