NotebookTalk

Clamibot

Everything posted by Clamibot

  1. Yep, high binned 10900Ks can do some pretty amazing all core speeds if they're kept cold enough. The cold also significantly reduces their voltage requirements and power draw, further increasing your overclocking headroom. You saw the results that just a TEC is capable of getting out of this chip. I was able to do 5.5 GHz on mine in games initially but could never keep the chip cold enough to keep it 100% stable. Any temperature spike too high and the machine crashed. One thing I really like about my current 14900KF system I just built last weekend is that the system doesn't crash if I overclock just a little too high. Programs crash instead, which makes the trial and error process a lot quicker now that I don't have to wait for system shutoffs and reboots every time. I do want to get the V2 of the MasterLiquid ML 360 Sub Zero, called the MasterLiquid ML 360 Sub Zero Evo, as that will enable me to continue my sub ambient overclocking adventures with the 14900KF. Funnily enough, Skatterbencher did some overclocking using a TEC on almost exactly my setup (he used a 14900KS instead of the KF) and got really good results out of it. He was able to easily do 6.2 GHz all core in Shadow Of The Tomb Raider. Video if anyone is interested:
  2. I'm using Windows 10 22H2. I'm specifically using the WindowsXLite version for maximum performance. Cinebench R15 is able to use the E cores, just not at the same time as the P cores. I can make it use either core type, just not both at the same time.
  3. Yes, the cooler sucks until you enable the TEC. There are 2 different control center applications you can use for it. You can either use the official Intel Cryo Cooling software: https://www.intel.com/content/www/us/en/download/715177/intel-cryo-cooling-technology-gen-2.html Or you can download the open source modded version that doesn't contain the stupid artificial CPU model check, which allows the TEC to work with any CPU: https://github.com/juvgrfunex/cryo-cooler-controller I prefer the open source version. Once you get either one of these applications set up along with the required drivers, just enable the TEC and watch your CPU go brrrrr (literally, the CPU can get pretty cold). If you use the official Intel version, I'd recommend setting it to Cryo mode for daily driving and enabling Unlimited mode only when benchmarking in short bursts. If you use the open source version, setting the Offset parameter to positive 2 is equivalent to running Cryo mode on the official Intel software, and setting that Offset parameter to a negative number is equivalent to running Unlimited mode. Once you enable the TEC, the cooler is good for a CPU power draw of up to about 200 watts before you start thermal throttling. I also used liquid metal between the CPU die and IHS, and between the IHS and the cooler's coldplate nozzle. Note: DO NOT use liquid metal between the IHS and coldplate on the MasterLiquid ML 360 Sub Zero if your IHS is pure copper; use nickel plated IHSes only! If you use a pure copper one like I did, the liquid metal will weld the two copper surfaces together over time with the TEC active, as the cold from the TEC coldplate and the heat from the CPU accelerate the absorption of the gallium from the liquid metal into both copper surfaces, eventually bonding them together. Alternatively, you can pre-treat both copper surfaces with liquid metal to prevent this from happening. Stupid me, now I can't get the cooler off my golden 10900K. It really needs a repaste.😭 I was able to do 5.4 GHz all core in games on this thing using the TEC, and 5.6 GHz single core boost on my particular chip. It can hold all core speeds over 5.4 GHz indefinitely as long as the temperature is kept below 140°F (60°C).
  4. 5.8 GHz P core only all core run in Cinebench R15: 6 GHz P core only single core run in Cinebench R15: Shadow Of The Tomb Raider Benchmark with the 14900KF at 5.8 GHz all core: I can do 240 fps raw on this system, mwahaha!
  5. I'm using the BIOS profile that was on the motherboard when I received the parts. Also I'm benchmarking using Cinebench R15. Also funnily enough, my system feels significantly more responsive and snappy with the E cores disabled. I haven't done any official benchmarks for this, but it just feels faster. Looks like it'll still serve as a good stability test for me then. I've always used Cinebench R15 to test my overclocks. If they pass a benchmark, they're pretty much always stable in games.
  6. I'm using the BIOS profile that was on the motherboard when I received the parts. Also I'm benchmarking using Cinebench R15. Looks like it'll still serve as a good stability test for me then. I've always used Cinebench R15 to test my overclocks. If they pass a benchmark, they're pretty much always stable in games.
  7. I just tried that and got the same behavior. Cinebench R15 was adamant that it only ran on one set of cores. I can get it to run on either P cores or E cores, but not both types at the same time.
  8. Since, oddly, Cinebench R15 refuses to use both the P cores and E cores on my 14900KF at the same time, I ended up doing a P core only run and an E core only run by manually setting processor affinity each time to those respective core groups (one way to script that is sketched below). The P core only run is the one in orange and the E core run is the one in brown. Looks like 8 P cores are more powerful than 16 E cores.
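For anyone who would rather script the affinity switching than click through Task Manager each run, here is a minimal sketch using Python with psutil. It assumes the usual 14900K-series logical CPU layout (0-15 = hyperthreaded P-core threads, 16-31 = E-cores) and a placeholder Cinebench path, so adjust both for your own system.

```python
# Minimal affinity sketch (assumes psutil is installed, Windows, and the usual
# 14900K/KF logical CPU layout: 0-15 = P-core threads, 16-31 = E-cores).
import subprocess
import psutil

P_CORES = list(range(0, 16))   # hyperthreaded P-core logical CPUs (assumed layout)
E_CORES = list(range(16, 32))  # E-core logical CPUs (assumed layout)

# Hypothetical install path; point this at your own Cinebench R15 executable.
CINEBENCH = r"C:\Cinebench R15\CINEBENCH Windows 64 Bit.exe"

proc = subprocess.Popen(CINEBENCH)
psutil.Process(proc.pid).cpu_affinity(P_CORES)  # pin to P cores for the P-core-only run
# Swap in E_CORES above for the E-core-only run.
```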
  9. Solid proof of why high speed, low latency RAM does in fact matter a lot in gaming. Even with the amount of 3D V-Cache these X3D CPUs have, it's still possible to exhaust that cache many times over in some games. This is exactly the reason Intel CPUs scale so well with fast memory; it helps mitigate the shortcomings of the cache on Intel's CPUs vs AMD's X3D CPUs (in regard to cache size, that is).
  10. Uhh, well I'm really glad my house and everything within a few miles didn't vaporize. Apparently my new super 14900KF consumed more than a quadrillion watts of power for a fraction of a second. @Mr. Fox This is some seriously powerful hardware you sold me. I don't know if this is better or worse than someone's laptop CPU running at 90,000,000°C. I'm surprised nuclear fusion didn't occur inside their laptop.
  11. Dual GPU is back in style my dudes! I did some testing with heterogeneous multi GPU in games with an RX 6950 XT and an RTX 2080 TI, and the results are really good. Using Lossless Scaling, I used the 6950 XT as my render GPU and the 2080 TI as my framegen GPU. This is the resulting usage graph for both GPUs at ultrawide 1080p (2560x1080) at a mix of high and medium settings tuned to my liking for an optimized but still very good looking list of settings. I'm currently at the Mission of San Juan location and my usage on the 6950 XT is 70%, while it is 41% on the 2080 TI. My raw framerate is 110 fps, interpolated to 220 fps. One thing that I really like about this approach is the lack of microstuttering. I did not notice any microstuttering using this multi GPU setup to render a game in this manner. Unlike SLI or Crossfire, this rendering pipeline doesn't split the frame and have each GPU work on a portion of it. Instead, we render the game on one GPU, then do frame interpolation, upscaling, or a combination of both on the second GPU. This bypasses any timing issues from having multiple GPUs work on the same frame, which eliminates microstuttering. We also get perfect scaling relative to the amount of processing power needed for the interpolation or upscaling as there is no previous frame dependency; only the current frame is needed for either process (plus motion vectors for interpolation, which are already provided). No more wasting processing power! Basically this is SLI/Crossfire without any of the downsides, the only caveat being that you need your raw framerate to be sufficiently high (preferably 100 fps or higher) to get optimal results. Otherwise, your input latency is going to suck and will ruin the experience. I recommend this kind of setup only on ultra high refresh rate monitors where you'll still get good input latency at half the max refresh rate (mine being a 200 Hz monitor, so I have my raw framerate capped at 110 fps). To get this working, install any 2 GPUs of your choice in your system. Make sure you have Windows 10 22H1 or higher installed or this process may not work. Microsoft decided to allow MsHybrid mode to work on desktops since 22H1, but you'll need to perform some registry edits to make it work (a scripted version of step 3 is sketched at the end of this post):
      1. Open up Regedit to Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}.
      2. Identify the four digit subfolders that contain your desired GPUs (e.g. by the DriverDesc key inside, since this contains the model name, making it really easy to identify). In my case, these happened to be 0001 for the 6950 XT and 0003 for the 2080 TI (not sure why as I only have 2 GPUs, not 4).
      3. Create a new DWORD key inside both four digit folders. Name this key EnableMsHybrid. Set its value to 1 to assign that GPU as the high performance GPU, or to 2 to assign it as the power saving GPU.
      4. Once you finish step 3, open up Graphics Settings in the Windows Settings app.
      5. Once you navigate to this panel, you can manually configure the performance GPU (your rendering GPU) and the power saving GPU (your frame interpolation and upscaling GPU) per program. I think that the performance GPU is always used by default, so configuration is not required, but it helps with forcing the system to behave how you want. It's more of a reassurance than anything else.
      6. Make sure your monitor is plugged into the power saving GPU and launch Lossless Scaling.
      7. Make sure your preferred frame gen GPU in Lossless Scaling is set to the desired GPU.
      8. Profit!
You now have super duper awesome performance from your dual GPU rig with no microstuttering! How does that feel? This approach to dual GPU rendering in games works regardless of whether you have a homogeneous (as required by SLI or Crossfire) or heterogeneous multi GPU setup. Do note that this approach only scales to 2 GPUs under normal circumstances, maybe 3 if you have an SLI/Crossfire setup being used to render your game. SLI/Crossfire will not help with Lossless Scaling as far as I'm aware, but if it does, say hello to quad GPU rendering again! The downside there is that you get microstuttering again, however. I prefer the heterogeneous dual GPU approach as it allows me to reuse my old hardware to increase performance and has no microstuttering.
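For reference, here's a minimal sketch of step 3 done as a script instead of manual Regedit clicks, using Python's built-in winreg module. The 0001/0003 subkeys are just what my system happened to use; find yours via DriverDesc first, and run it as administrator since it writes to HKLM.

```python
# Minimal sketch of the EnableMsHybrid registry edit (step 3 above).
# Assumes the GPU subkeys have already been identified via DriverDesc;
# 0001/0003 below are this system's values, not a general rule. Run as admin.
import winreg

GPU_CLASS = r"SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}"

def set_ms_hybrid(subkey: str, value: int) -> None:
    """Write EnableMsHybrid: 1 = high performance GPU, 2 = power saving GPU."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, f"{GPU_CLASS}\\{subkey}", 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "EnableMsHybrid", 0, winreg.REG_DWORD, value)

set_ms_hybrid("0001", 1)  # render GPU (the 6950 XT in my case)
set_ms_hybrid("0003", 2)  # frame gen/upscaling GPU (the 2080 TI in my case)
```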
  12. Yeah I don't get it either. Matte screens are unreadable under intense light sources because the diffusion of light across the screen from the matte anti glare just makes the screen blurry. At least on a glossy screen, even with glare, it's still sharp, so you can read whatever parts you can still see.
  13. 300 Hz glossy perfection: Bleh matte layer I peeled off: This stupid piece of plastic ruins every screen I lay my eyes on that has it. Glossy forever baby! I got a 300 Hz screen for my X170. Since there are no glossy screen options, I had to take matters into my own hands and glossify it since I absolutely detest matte screens. This came out well just like when I did it on my current 144 Hz screen. It's good to know this mod still works on laptop panels, as I know on some of the newer desktop ultrawide panels, the matte anti glare layer is stupidly infused into the polarizer layer instead of being a separate layer.
  14. Yasss! It's alive! I'm glad to hear this project is still in progress and has not been lost to the sands of time. I've been looking for and following similar projects when I find them in hopes I can gather enough knowledge to build my own version of such projects. My only critique would be to allow for an 18 inch screen, but I'm also a go big or go home kind of guy, plus I like big screens. Weight is not a concern for me. I'd really like a return of 19 inch class laptops, but I'd like a rise of mini ITX laptops even more.
  15. Shadow Of The Tomb Raider Benchmark on my Legion Go (also using a new WindowsXLite 22H2 install): I set the power limit of the APU to 54 W and overclocked the iGPU. All 8 CPU cores were active. I also set the temperature limit of the CPU and the skin temperature sensor to 105°C to stop the APU from throttling because it thinks my hands will get burned. The results are a definite improvement over stock settings, but still not up to par with my standards for raw framerate. However, the input lag is not very noticeable when using a controller and setting my raw framerate to 72 fps, then interpolating to 144 fps, thus giving me my high refresh rate experience on this device. Despite raising the power limit to 54 W, the system was maxing out at 40 W sustained. I'm not sure if this is because the APU itself was capped at 40 W, with the remaining 14 W of the power budget being used for everything else. I thought the power limit I set using the Smokeless tool was for the APU only. I'm definitely pushing the power circuitry pretty hard here as the APU is rated for 28 W only. I was able to perform a static overclock on the iGPU to 2400 MHz, and performance did improve. Interestingly, performance dropped if I tried pushing further, as it seems there is either a power limit, voltage limit, or both. UXTU allows me to overclock the iGPU up to 4000 MHz, but I didn't try going that far. I was however able to overclock my Legion Go's iGPU to 3200 MHz without the display drivers crashing, so it looks like this iGPU has a lot of overclocking headroom left, but is held back by power/voltage limits. I can't wait to see a handheld with a Strix Halo APU and a 240 Hz screen.
  16. Well guys, I benchmarked my 2019 LTSC install vs my new WindowsXLite 22H2 install in Shadow Of The Tomb Raider again (this time using the built in benchmark), and I can confirm the 22H2 install does indeed perform better, even with all my programs installed (which seemed to have no effect on performance at all). Looks like WindowsXLite 22H2 is the way to go as an upgrade from LTSC 2019! I'm currently installing it on my Legion Go and will be installing it on my X170 next. I had my 10900K running at 3.7 GHz to induce a CPU side bottleneck. My GPU is a Radeon RX 6950 XT. 2019 LTSC: WindowsXLite 22H2 (minimalist gamer only installation): WindowsXLite 22H2 (all my programs installed + some extra services running): WindowsXLite 22H2 wins by about 2%. I did not expect this at all. I was expecting a performance downgrade, but I am very happy I got a slight performance upgrade instead. You don't see that very often when installing newer versions of Windows. I like that Shadow Of The Tomb Raider is useful both for CPU and GPU benchmarks. This makes it an easy all in one benchmark that saves me time, as it will give me a general idea of performance differences between different machines and Windows installs.
  17. So far I've gotten my 4000 CL15 32 GB (4x8GB) kit to 4200 with 1.5 V on the IMC and 1.54 V on the memory itself. I haven't tried pushing further, but I do think there's headroom left if I push the memory voltage higher. Samsung B-Die is awesome! This is of course dependent on the motherboard. I have an MSI Unify Z590 motherboard, and I know these were made for overclocking. If you get a similar quality board, you should obtain similar results.
  18. I've had a lot of fun with the ML360 Sub-Zero. That's the cooler I've been running the entire time I've had my current desktop. It's incredibly good for gaming and allows me to do 5.6 GHz on my 10900K in games, which really helps with games that have an artificially induced single core performance bottleneck. I also know liquid metal wasn't recommended with this cooler, but I did it anyway, and that made the temperature results even better of course.
  19. Clevo X170SM-G user manual: https://www.manualslib.com/manual/1890112/Clevo-X170sm-G.html#manual I think G-Sync won't work once you upgrade the GPU, but I personally wouldn't worry about it. G-Sync is a gimmick anyway since we'd always want our games running at the monitor's max refresh rate for the best experience. I lost G-Sync after replacing my laptop's monitor, and I found it really didn't matter. I haven't missed G-Sync at all.
  20. Oh you're using Windows 11 builds? My WindowsXLite install is based on a Windows 10 Pro 22H2 build. To add to that list you have:
      (6) Disable unnecessary security mitigations like Spectre, Meltdown, Core Isolation/Memory Integrity (Virtualization Based Security), and Control Flow Guard (a registry sketch for this is below)
      (7) Install DXVK Async into games that see an uplift from it
      (8) Use Lossless Scaling (you get best results if your raw framerate is already 100 fps or higher with mouse and keyboard, or if on a controller, if your raw framerate is already 72 fps or higher). This software is absolutely amazing!
      (9) Perform settings tuning in games. I find that some settings barely make a difference between low and ultra, especially in newer games.
      (10) Disable anticheat and/or DRM if possible
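In case it helps anyone with item (6), here's a minimal sketch of the registry side using Python's built-in winreg module. It only covers the Microsoft-documented FeatureSettingsOverride values for the Spectre/Meltdown mitigations and the Memory Integrity (HVCI) toggle; Control Flow Guard lives in Exploit Protection and isn't covered here. Run it as administrator, reboot afterwards, and keep in mind you're deliberately trading security for speed.

```python
# Minimal sketch for item (6): disable Spectre/Meltdown kernel mitigations and
# Memory Integrity (HVCI) via their documented registry values. Run as admin
# and reboot; this reduces security in exchange for performance.
import winreg

MM = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
HVCI = r"SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios\HypervisorEnforcedCodeIntegrity"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, MM, 0, winreg.KEY_SET_VALUE) as key:
    # 3/3 turns off the Spectre v2 and Meltdown kernel mitigations
    winreg.SetValueEx(key, "FeatureSettingsOverride", 0, winreg.REG_DWORD, 3)
    winreg.SetValueEx(key, "FeatureSettingsOverrideMask", 0, winreg.REG_DWORD, 3)

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, HVCI, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)  # Memory Integrity off
```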
  21. After some preliminary testing with Shadow Of The Tomb Raider in Windows 10 LTSC 2019 and WindowsXLite 22H2, it looks like the CPU side performance is actually slightly higher on the 22H2 installation. I'm using the maximally tuned OOB install without Windows Defender. I can probably do a bit more tuning with it too. My LTSC install is also tuned with stupid security mitigations disabled, but it's probably not tuned as much as some of you guys have managed to do with your installs. GPU side performance seems to be higher too, but I don't know by what amount as I was only testing CPU side performance. I need to get some concrete numbers, but this is looking really good so far. To be fair, the 22H2 installation is currently set up as a minimalist installation purely for game testing, so I need to install all my programs onto that Windows install and compare again as that could affect the results. It probably won't, but I need to cover all my bases. If WindowsXLite 22H2 truly outperforms Windows 10 LTSC 2019, I will be happy. This will be the first time I've seen a version upgrade in Windows actually deliver a performance upgrade rather than a downgrade (other than upgrading from regular consumer editions to Windows 10 LTSC, since I've seen this firsthand, and that jump in CPU side performance was significant). I'll then need to upgrade all my systems🤪
  22. As the title says, Humble Bundle currently has a deal for a big collection of Sid Meier games. This bundle contains the past 5 Civilization games + DLCs for Civilization 6, and some other Sid Meier games I'm less familiar with. You only have to pay $18 to obtain the entire collection, so go get it while it's hot! The deal lasts for another week as of the date of this post. Enjoy! https://www.humblebundle.com/games/2k-presents-sid-meier-collection?hmb_source=&hmb_medium=product_tile&hmb_campaign=mosaic_section_1_layout_index_1_layout_type_threes_tile_index_2_c_2kpresentssidmeiercollection_bundle&_gl=1*1xmxnzv*_up*MQ..*_ga*NzcxNjUwNTUzLjE3MzE0ODE2NDU.*_ga_BBZCZLHBF6*MTczMTQ4MTY0NS4xLjAuMTczMTQ4MTY1Mi4wLjAuMA..&gclid=Cj0KCQiAlsy5BhDeARIsABRc6ZucfeiADFbBRCh22Int8tkA_caIeK7Kb3vtXXRRJo29B__usIKX2b8aAgMLEALw_wcB
  23. Christmas came early for me this year! Lol, yep, you get more done and you get your money's worth when buying from bro @Mr. Fox. A super duper mega ultra deluxe hardware bundle being placed right into my hands. You betcha awesome is here! Lots of goodies! I haven't gotten anything set up due to work being very busy this week, but I'm definitely looking forward to assembling and tuning 2 systems! One will be for me and the other will be for a buddy building his first desktop after graduating college and years of using laptops and being dissatisfied with the diminishing options for upgrade paths. Full hardware bundle contents:
      - Super bin i9 14900KF
      - i9 14900K
      - Asus Maximus Z790 Apex motherboard
      - Asrock Z690 PG Velocita motherboard
      - Super bin G.SKILL 32 GB 8000 MHz DDR5 memory kit (pre tuned to 8400 MHz CL 36 just for me!)
      - Crucial 16 GB 6000 MHz DDR5 memory kit
      - Gigabyte Aorus Xtreme Waterforce RTX 2080 TI (bro Fox even included an air cooler for it!)
      - Iceman direct die waterblock
      - Iceman direct touch RAM waterblock
      - Bykski RAM heatsinks for the super bin G.SKILL memory kit
  24. As an owner of an Optane 905P, I feel qualified to answer this question. Depending on what you're doing and your setup, you may or may not notice a difference. If you use Optane with something like an i7 7700K, you probably won't notice a difference vs a standard m.2 SSD. If you are however using Optane with a recent platform (like 14th gen or current gen), you will definitely notice a significant difference. I currently have an Optane 905P installed in my system with a 10900K in it, and the difference is significant. The first thing you'll notice is that the system feels more responsive. I can't assign this particular metric to a number, but it feels snappier vs using a standard SSD. For everyday usage, programs load significantly faster (provided your CPU isn't a bottleneck on load speeds). You can also open a ton of programs at once and the Optane drive will just blast through your file read requests. Optane also excels at small file copy speeds due to the much faster random write speeds vs a standard SSD (a quick way to measure the random access difference is sketched after this post). Optane also doesn't slow down as it fills up vs a standard SSD, so you can load these babies to the brim and not see a decrease in drive performance. The most major difference I've noticed is in file search speed. When doing development for my job, I sometimes have to look for particular files. With Optane, I can search the root directory of a Unity project using Windows Explorer file search (mind you, our projects are pretty big for VR games, at least 30 GB or larger for the repository), and my 905P will have already returned a bunch of results after I snap my fingers (it's still searching, but at least it found a few files right off the bat). File searching is so much faster on an Optane drive. If I were to perform this same task on a standard SSD, it would take a bit before the file search returned any results, even the initial few. For my development workloads, code compiles faster, asset imports complete significantly faster, and builds complete a bit faster. For gaming, games load faster, especially open world ones. Any game that does heavy asset streaming also has its loading microstutters eliminated. Both development workloads and game load speeds will continue to scale with ever faster CPUs on Optane, whereas they've already kinda hit a wall with standard SSDs. Oh yeah, hibernate and wake from hibernate are also far faster on Optane vs a standard SSD. So is file zipping/unzipping. So overall, Optane is a must as a boot drive if you want the snappiest experience, or if you're a developer like me, or just want the best game load times, or if you do tons of file operations (especially with small files), or if you want some combination of the 4. Optane benefits newer platforms far more than older ones as newer CPUs can really take advantage of the throughput of Optane's random read and write speeds.
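If you want to quantify the random read difference rather than just feel it, here's a minimal sketch of a 4 KiB random-read probe in Python. The test file path is a placeholder, and for honest numbers the file should be much larger than your RAM (otherwise the OS cache hides the drive's latency); run it once against the Optane drive and once against a standard SSD and compare.

```python
# Minimal 4 KiB random-read latency probe (assumes a pre-made test file on the
# drive under test; use a file much larger than RAM to dodge the OS cache).
import os
import random
import time

TEST_FILE = r"D:\optane_testfile.bin"  # hypothetical path on the drive under test
BLOCK = 4096
READS = 10_000

size = os.path.getsize(TEST_FILE)
with open(TEST_FILE, "rb", buffering=0) as f:  # unbuffered reads
    start = time.perf_counter()
    for _ in range(READS):
        f.seek(random.randrange(0, size - BLOCK))
        f.read(BLOCK)
    elapsed = time.perf_counter() - start

print(f"average 4 KiB random read latency: {elapsed / READS * 1e6:.1f} us")
```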
  25. The 9800X3D seems like a really good candidate for Intel's Cryo Cooling watercoolers. Fortunately it's now possible to run those waterblocks and AIOs using modified Intel Cryo Cooling software that doesn't have the stupid artificial CPU restriction check from here: https://github.com/juvgrfunex/cryo-cooler-controller