NotebookTalk

Everything posted by Mr. Fox

  1. Smart move, by a smart guy. Many of them are little more than a glorified smartphone. Hardware is being toned down and the OS dumbed down. That said, they are not as unpleasant and cumbersome.
  2. Here is a thread to show off your system. I know some of you are rocking some pretty darned impressive beasts, so don't be shy about sharing your passion. You can highlight the build in whatever manner you feel appropriate, whether it is focusing on certain components, the customizations you have done, or simply posting pretty pictures of your pride and joy. If the details you are sharing are not already captured in your signature, please add a list of the components/specs. WRAITH 2.0 - For Fun | BANSHEE 2.0 - For Work. @Papusan @pathfindercod @electrosoft @Talon @johnksss @Rage Set @Reciever - all of you have stuff worthy of bragging about here.
  3. I think this might be my favorite Linux desktop look/feel. It is a mixture of Windows 7 and 10 GUI elements.
  4. The information is there, but it is not at our fingertips the way it was with Windows 7 and earlier versions of Windows 10. One of the things that holds Linux back from being a better product is Linux zealots that are driven by irrational hate. They are just a little bit worse than the dummies that complain Linux isn't exactly the same as Windows, forgetting that the differences are the reason(s) people leave Windows in favor of Linux in the first place. Somewhere in the middle, with a "do what makes sense" attitude and no hesitation to copy or emulate a good idea, we would find a solution, were it not for the extremists on opposing ends of the spectrum.
  5. The 5800X3D seems like it is designed to make a PC the equivalent of an overpriced gaming console. I think I would have a whole lot more respect for all of the Big Tech vendors if they would come right out and say, "Sorry kids... we've got nothing new right now. When we do you can be sure that it will be awesome, but there is no point in us wasting your money or burning our own calories on releasing crap just for the sake of saying we have something new when it is actually nothing special. So, please spend your money on the killer stuff that we already gave you, or take time to enjoy it if you already have, and just chill until something great surfaces."
  6. So, I had been using primarily Linux Mint and Pop_OS! with the Cinnamon DE. With my recent upgrade to the 12900K there were a number of things not working. I discovered that the latest beta of Fedora restores the functionality that was broken with Mint and Pop_OS! In particular, cpupower-gui works as intended. You can now control the clocks of each core (both P and E cores); there is a rough sketch of what that does under the hood at the end of this list. I like it better than the last version of cpupower-gui that I was using with 10th Gen. It adds a good amount of features that older versions did not have.
  7. Okey-dokey. As the title implies, here is a thread for Linux users to share new information, tips, tweaks and help others fix issues they encounter using the Windows replacement of their choice.
  8. Nice. Fedora/RHEL added 12th Gen support and now the cpupower-gui utility works properly. You can change clocks per P-core and E-core from the Linux desktop.
  9. I hope the scalping bastard pigs that hoarded GPUs and drove the prices to insane levels lose many millions or even billions of dollars and flood the market with more GPUs than the market can come even remotely close to handling. It would be great if GPUs started selling for like 20% of MSRP with a supply that is 5 or 10 times greater than the demand, LOL. Imagine getting a 3060 for $150 or a 3090 for $350, and as many as you want. That would be awesome. 3090 SLI for benching. 😁 Edit: with a BIG power supply. No 500W gamerboy PSU.
  10. Unrelated matter, last night I was looking at a lot of things closely and noticed the Cablemods sleeved cables were getting kind of scary looking. I already knew some of the 24-pin motherboard wires had melted insulation from the 7980XE overclocking making the cables overheat. I saw the same thing starting to happen on all three 8-pin GPU cables. One more so than the other two. So, I installed the brand new EVGA 8-pin GPU and 24-pin motherboard cables that came with the 1600W PSU that I had never used.
  11. That is the problem right now. I have no spare parts available that I could use for testing at the moment. I sold off lots of the spare parts collecting dust and used cash from those sales to help pay for new stuff I use daily. Spare CPUs, motherboards, air coolers, RAM, waterblocks, etc. were sold or given away. I even built an entire desktop from spare parts and sold that. The only thing I had to acquire was a mid-tower case, and I got one for free from a relative that had no use for it. Here is the system I built and sold using spare parts with the free case. I made $650 off of parts sitting in boxes in my closet that were not being used for anything.
  12. That is always true, especially for GPU. Even without overclocking, lower temps generally result in higher GPU core clocks. If you are enjoying record-breaking results with your laptop, might be best to not "fix it" right now. It might not perform as well if you disturb things. I would say leave it alone if it is working well and hold off on changing anything until your results start trending in the opposite direction.
  13. Yeah, that makes perfect sense. In fact, that was part of my thought using the EK manifold since the CPU and GPU can flow independently (in parallel rather than series) and have totally unique flow rates. To test that I need to remove an internal plug from a chamber and relocate one QDC fitting to another port on the manifold, so that should be easy to do as soon as I take time to do it. Otherwise, I will have to spend money to buy parts I do not have just for testing. I may do that anyway. I can probably grab stuff from a yard sale or Craigslist cheap enough.
  14. Yes, I agree. I need to focus primarily on the GPU core temps on chilled water, but I don't think I have the parts I need to isolate the CPU from the loop. It would not make any sense to test the GPU without chilled water because that is not an issue that needs to be resolved. It's not an overall issue either. It's only that I can't get it cold enough for higher GPU core clocks when benching. It's perfectly fine for everything else other than that. And, there is a huge difference between the temperature of the chilled water in the loop and the GPU core. I think that is where it seems logical to start the witch hunt. Probably focusing first on the contact point and TIM sitting between the GPU block and the GPU core. There is roughly a 20-25°C delta there (a quick back-of-the-envelope check of what that delta implies is at the end of this list).
  15. I don't have anything I can use on the GPU that I can think of. I could use the pump/block from the KPE AIO if you are finished using it, but I do not have a spare GPU water block and do not know how I could cool the GPU without the Hydro Copper block on it at the moment. Or, did you mean leave only the GPU in the current loop and remove the CPU from the loop? I may have misunderstood what you meant. I had a CPU AIO as a spare part, but I used it in a PC build for one of my sons, so I don't have that available. And, I no longer have a CPU air cooler. I got rid of the one I had when I sold off some stuff that was just taking up space in my crowded little office. And... not only performance deficits to contend with, but also a general lack of stability and reliability that needs to be fixed. The WHEA errors and USB drop-out issue are not things I am likely to forgive and forget for a very long time. A system that produces nice benchmark scores loses a lot of its sparkle when it is unreliable and frustrating to deal with for normal everyday use. The 5950X and Crosshair experience was my "21st Century Edsel" in the digital realm.
  16. That's a good idea, but I don't think I have the parts I would need to test that.
  17. What is puzzling about it is everything is freshly cleaned (filters, blocks, etc.) and now the temps are higher, particularly on the GPU. I can't really say for sure on the CPU because that is new and I have very little to compare with before/after. And, the loop flow rate is higher because of everything being freshly cleaned and because I reduced the restriction of the QDC fittings when I redid the loop structure. It is about 70 L/H higher than it was before, yet it did not help anything that I can tell. But, the temps went the other direction coincident with cleaning the Hydro Copper block. That happened before I restructured the loop and I think that is where I need to start the investigation. My guess is something with the mounting of the block to the GPU might not be ideal. That is only a guess right now though. There should not be a huge delta between the water temperatures and core temperatures if my logic is correct. If that is an accurate statement, it seems to point to the mounting of the GPU block. Edit: It is also possible that using the Fujipoly pad on the entire back plate is affecting core temps negatively by "normalizing" the heat over the whole GPU. Sure, the memory might be 20°C cooler now, but that might be because the memory is forcing some of its heat onto the GPU core now that it has a pathway or conduit to do that which did not exist before.
  18. Maybe I said something in a way that was unclear, but I have not noticed any difference in flow rate based on temperature. It does not seem to vary by a lot. By that I mean it is not different today than yesterday; watching the meter over the course of a minute, it changes up and down every few seconds, with the average somewhere in the middle, and there isn't a big difference between the high, low and average. It is maybe just a little bit lower using anti-freeze than just straight distilled water, but not enough that I could quantify it with a number. I think that is because plain distilled water has a lower viscosity. One thing I am going to experiment with (haven't had time) is how using the EK manifold might improve flow rate through the GPU so it is not dependent on the volume of water that can move through the CPU. The two blocks could function independently without one affecting the other. It won't necessarily change the flow rate of the loop, but if it is harder to move water through the CPU block, the GPU won't be starved because water can flow through both of them independently and simultaneously (see the series-versus-parallel sketch at the end of this list). I hope that makes sense. I don't have the manifold set up that way at the moment, but I plan to test the theory and compare both ways.
  19. Yes, but that was before I took it apart and cleaned the trash out of the Hydro Copper. GPU core is almost always above 40°C under load now. I have even seen it get into the high 50s, but it is usually in the low to mid 40s under load and the low to mid teens at idle. It used to drop into the single digits at idle and stay in the teens under load. The loop shows a water temperature between 14-20°C and a flow rate between 190-220 L/H. The temperature and flow rate measurements are taken on the return line side of the loop, after the water has already passed through the CPU and GPU. Looking at it right now, without the chiller having been used since last night, it is hovering around 30-31°C with a 200-205 L/H flow rate.
  20. Brother @johnksss it looks like the temperature readings from 3DMark are a LOT lower on your GPU than mine are. It doesn't take more than a 5-10°C difference to cause a pretty big drop in GPU core clocks, as we both know. I really hate that about modern GPUs. I wish they would hold their max core clock until 80°C like they did when things were still done right. The temperature target is essentially meaningless now because of how max core clocks are stair-stepped in stupid bins tied to temperature (there is a toy model of that binning at the end of this list). https://hwbot.org/submission/4972225_mr._fox_3dmark___time_spy_geforce_rtx_3090_23047_marks/ https://hwbot.org/submission/4972233_mr._fox_3dmark___fire_strike_geforce_rtx_3090_43920_marks/
  21. I've given up on Port Royal for the time being. Things seem back to normal now. I can't get the clocks to hold as high as I set them, but they are at least holding steady again. Most of the time, 2175 is the highest clock I can get under load. It doesn't matter how high it starts; within a few seconds it is at 2175 and it stays there through the end. I did get my best ever Time Spy and Fire Strike runs in last night, but Port Royal stays in the 15.5K range due to the limited max core clock. I am pretty sure the reduction in clock speed is temperature related. You seem to be able to keep your GPU 20-25°C colder than I can get mine, and that is more than enough temperature difference to be an impediment to GPU benchmark scores. The water in my loop is staying cold enough (if the digital meter is accurate) but the core is getting between 40-50°C. I may try different thermal pads and liquid metal to see if that helps. I used KPX at the last GPU repaste. The issues I had with leaking after cleaning the block, with the o-ring slipping out of place, required that I take it apart three times before I could get the o-ring to seal again, so I could also have a pad out of place that is interfering with contact. But, I have noticed my CPU temps are also higher using KPX than they are using Phobya, Cryo or MX4. I like how easy KPX is to use, but so far it doesn't seem to be as effective at transferring the heat. Everything I have used it on (both desktops and a laptop) has higher temperatures using KPX. It may be ideal for sub-zero cooling because it doesn't harden and crack. I have purchased them from a number of places, including eBay. I have purchased more from Kinguin.net than anywhere else. Sometimes a few bucks more than eBay, but always reliable (so far) for me. https://www.kinguin.net/listing?active=0&hideUnavailable=0&phrase=windows&page=0&size=25&sort=bestseller.total,DESC
  22. In reply to "I must be missing something because the first picture you posted had clocks flatline in the 15.6K run posted earlier?" - That is because I had boost locked. The core was set to 2250 but only held 2160. Without boost locked it varied between about 1950 and 2190, constantly changing and always at substantially lower clocks than expected. Me too. I hope so. Still need to verify it on chilled water at higher clocks first. But, it is definitely looking better.
  23. It seems like that is what the problem was. So, all this time I haven't been benching because I thought something was messed up with my GPU and it turns out it was because I forgot about them silly dip switches. I guess that makes ME the DIP, LOL. Here is a "hot" run (51°C with no chiller) using a heavy memory overclock and modest core overclock. Still static core clocks except for the 30MHz thermally induced drop in frequency. The sad part is, that is less than 700 3DMarks lower than my highest score achieved with the dip switches set to the "on" position.
  24. OK, here it is. It might have been those dip switches that I forgot about. Notice the GPU core clocks are a flat-line all the way across. I will know more when I fire up the chiller and see how it looks with an overclock, but that flat-line is how it used to look whether stock or up to my max stable 2250. It would stay exactly where I put it, or it would crash if it couldn't be pushed that far, but none of that retarded dynamic bouncy clocking bullcrap. I really don't like that spastic behavior at all, LOL... for CPU or GPU.
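The per-core clock control mentioned in posts 6 and 8 ultimately goes through the Linux cpufreq sysfs interface, which is what cpupower-gui drives with a GUI on top. Here is a minimal sketch of that idea, assuming a 12900K where the first 16 logical CPUs are P-core threads and the last 8 are E-cores; the core numbering and the frequency caps are illustrative assumptions, and it needs to run as root.

```python
#!/usr/bin/env python3
# Rough sketch: cap per-core max clocks via the cpufreq sysfs interface.
# Core numbering and kHz values below are assumptions for illustration.
from pathlib import Path

def set_max_khz(cpu: int, khz: int) -> None:
    """Write a max frequency (in kHz) for one logical CPU."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq")
    path.write_text(str(khz))

P_CORE_THREADS = range(0, 16)   # assumed: 8 P-cores with hyper-threading
E_CORES = range(16, 24)         # assumed: 8 E-cores

if __name__ == "__main__":
    for cpu in P_CORE_THREADS:
        set_max_khz(cpu, 5_200_000)   # cap P-cores at 5.2 GHz
    for cpu in E_CORES:
        set_max_khz(cpu, 3_900_000)   # cap E-cores at 3.9 GHz
```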
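For the 20-25°C water-to-core delta discussed in posts 14 and 17, a quick back-of-the-envelope number: dividing the delta by the power going through the block gives an effective water-to-die thermal resistance. The board power and the "healthy block" reference value below are assumptions for illustration, not measurements from this card.

```python
# Back-of-the-envelope check on the water-to-core delta (illustrative numbers).
gpu_power_w = 400.0   # assumed board power under a benchmark load
delta_t_c = 22.0      # middle of the observed 20-25°C water-to-core delta

r_effective = delta_t_c / gpu_power_w
print(f"Effective water-to-die resistance: {r_effective:.3f} °C/W")
# ~0.055 °C/W. If a well-mounted full-cover block is assumed to sit closer
# to ~0.03 °C/W, the excess points at the mount/TIM, which matches the
# suspicion in the posts above.
```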
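The series-versus-parallel manifold point in post 18 can be shown with a toy hydraulic model that treats each block as a simple linear restriction. Real blocks behave closer to quadratically with flow and a real pump cannot deliver unlimited volume, so the numbers are qualitative only; the restriction values are made up.

```python
# Toy comparison of series vs. parallel (manifold) flow through two blocks.
pump_pressure = 1.0   # arbitrary pressure budget from the pump
r_cpu = 0.8           # assumed restrictive CPU block
r_gpu = 0.3           # assumed freer-flowing GPU block

series_flow = pump_pressure / (r_cpu + r_gpu)   # one flow through both blocks
parallel_gpu_flow = pump_pressure / r_gpu       # GPU branch sees full pressure

print(f"GPU flow in series:   {series_flow:.2f}")
print(f"GPU flow in parallel: {parallel_gpu_flow:.2f}")
# In parallel the GPU branch is no longer limited by the CPU block's
# restriction, which is the "not starved" behavior described in post 18.
```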
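Finally, a toy model of the temperature-binned boost behavior complained about in posts 20 and 23. The 15 MHz step size and the threshold temperatures are assumptions for illustration; the actual boost table varies by card and VBIOS.

```python
# Illustrative model: the core drops one small clock bin each time it
# crosses a temperature threshold. Step size and thresholds are assumed.
def boosted_clock(set_clock_mhz: int, core_temp_c: float) -> int:
    thresholds_c = [35, 45, 55, 60]   # assumed step-down points
    bins_lost = sum(1 for t in thresholds_c if core_temp_c >= t)
    return set_clock_mhz - 15 * bins_lost

print(boosted_clock(2250, 28))   # cold core: holds the full 2250 MHz
print(boosted_clock(2250, 52))   # low 50s: two bins (~30 MHz) lower,
                                 # roughly the drop described in post 23
```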