NotebookTalk

Mr. Fox

Member
  • Posts

    5,341
  • Joined

  • Days Won

    565

Everything posted by Mr. Fox

  1. I don't have anything I can use on the GPU that I can think of. I could use the pump/block from the KPE AIO if you are finished using it, but I do not have a spare GPU water block and do not know how I could cool the GPU without the Hydro Copper block on it at the moment. Or, did you mean leave only the GPU in the current loop and remove the CPU from the loop? I may have misunderstood what you meant. I had a CPU AIO as a spare part, but I used it in a PC build for one of my sons, so I don't have that available. And, I no longer have a CPU air cooler. I got rid of the one I had when I sold off some stuff that was just taking up space in my crowded little office. And... not only performance deficits to contend with, but also a general lack of stability and reliability that needs to be fixed. The WHEA errors and USB drop-out issue are not things I am likely to forgive and forget for a very long time. A system that produces nice benchmark scores loses a lot of its sparkle when it is unreliable and frustrating to deal with for normal everyday use. The 5950X and Crosshair experience was my "21st Century Edsel" in the digital realm.
  2. That's a good idea, but I don't think I have the parts I would need to test that.
  3. What is puzzling about it is everything is freshly cleaned (filters, blocks, etc.) and now the temps are higher, particularly on the GPU. I can't really say for sure on the CPU because that is new and I have very little to compare with before/after. And, the loop flow rate is higher because of everything being freshly cleaned and because I reduced the QDC fittings' restrictions when I redid the loop structure. It is about 70 L/H higher than it was before, yet it did not help anything that I can tell. But, the temps went the other direction coincident with cleaning the Hydro Copper block. That happened before I restructured the loop, and I think that is where I need to start the investigation. My guess is something with the mounting of the block to the GPU might not be ideal. That is only a guess right now though. There should not be a huge delta between the water temperatures and core temperatures if my logic is correct. If that is an accurate statement, it seems to point to mounting of the GPU block. Edit: It is also possible that using the Fujipoly pad on the entire back plate is affecting core temps negatively by "normalizing" the heat over the whole GPU. Sure, the memory might be 20°C cooler now, but that might be because the memory is forcing some of its heat onto the GPU core now that it has a pathway or conduit to do that which did not exist before.
  4. Maybe I said something in a way that was unclear, but I have not noticed any difference in flow rate based on temperature. It does not seem to vary by a lot. By that I mean it is not different today than yesterday, only that watching the meter over the course of a minute it will change up and down every few seconds; somewhere in the middle is the average, and there isn't a big difference between the high/low/average. It is maybe just a little bit lower using anti-freeze than straight distilled water, but not enough that I could quantify it with a number. I think that is because the anti-freeze mixture has a higher viscosity than plain water. One thing I am going to experiment with (haven't had time) is how using the EK manifold might improve flow rate through the GPU so it is not dependent on the volume of water that can move through the CPU. They could function independently and not have one affect the other. It won't necessarily change the flow rate of the loop, but if it is harder to move water through the CPU block, the GPU won't be starved because water can flow through both of them independently and simultaneously. I hope that makes sense. I don't have the manifold set up that way at the moment, but I plan to test the theory and compare both ways.
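The series-vs-parallel reasoning above can be sketched with a toy hydraulic model. This is a minimal sketch assuming a constant pump head and a quadratic pressure-drop law; the restriction coefficients are made-up illustrative values, not measurements of any real block.

```python
import math

# Toy hydraulic model comparing series plumbing (CPU and GPU blocks in
# one line) with parallel plumbing through a manifold. Assumes a
# constant pump head and pressure drop = k * Q^2 per block; the k
# values are made-up illustrative restrictions, not measurements.

def flow_at(head: float, k: float) -> float:
    """Flow Q satisfying head = k * Q^2."""
    return math.sqrt(head / k)

PUMP_HEAD = 1.0   # arbitrary units
K_CPU = 4.0       # restrictive CPU block (assumed)
K_GPU = 1.0       # freer-flowing GPU block (assumed)

# Series: one flow path, restrictions add, so the GPU sees whatever
# the more restrictive CPU block allows through.
q_series = flow_at(PUMP_HEAD, K_CPU + K_GPU)

# Parallel: each block gets its own path, so GPU flow no longer
# depends on how restrictive the CPU block is.
q_gpu_parallel = flow_at(PUMP_HEAD, K_GPU)

print(f"GPU flow, series:   {q_series:.2f}")
print(f"GPU flow, parallel: {q_gpu_parallel:.2f}")
```

In reality a pump follows a pressure/flow curve rather than delivering a constant head, so the gain from going parallel would be smaller than this toy model shows, but the direction of the effect is the same: the GPU block stops being throttled by the CPU block's restriction.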
  5. Yes, but that was before I took it apart and cleaned the trash out of the Hydro Copper. GPU core is almost always above 40°C under load now. I have even seen it get into the high 50s, but it is usually in the low to mid 40s under load and the low to mid teens at idle. It used to drop into the single digits at idle and stay in the teens under load. The loop shows water temperature between 14-20°C and a flow rate between 190-220 L/H. The temperature and flow rate measurements are taken on the return line side of the loop, after the water has already passed through the CPU and GPU. Looking at it right now, without the chiller having run since last night, it is hovering around 30-31°C and a 200-205 L/H flow rate.
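As a rough sanity check of the numbers above: at the flow rates quoted, the water itself barely warms up end to end, which is why a large water-to-core gap implicates the block mount or TIM rather than the loop. The ~450 W combined heat load below is an assumed illustrative figure, not a measurement from this system.

```python
# Rough sanity check: how much does the coolant itself warm up across
# the loop at a given flow rate and heat load? The ~450 W combined
# CPU+GPU load is an assumed illustrative figure, not a measurement.

def coolant_delta_t(power_w: float, flow_lph: float) -> float:
    """Coolant temperature rise across the loop, in °C (water properties)."""
    cp = 4186.0                          # specific heat of water, J/(kg*K)
    mass_flow_kg_s = flow_lph / 3600.0   # 1 L of water is about 1 kg
    return power_w / (mass_flow_kg_s * cp)

print(f"{coolant_delta_t(450.0, 200.0):.2f} °C")
```

At ~200 L/H and ~450 W the water only warms about 2°C from inlet to return, so a 20-30°C gap between water temperature and core temperature points at the block-to-die interface, not at flow rate.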
  6. Brother @johnksss it looks like the temperature readings from 3DMark are a LOT lower on your GPU than mine are. It doesn't take more than 5-10°C to cause a pretty big drop in GPU core clocks, as we both know. I really hate that about modern GPUs. I wish they would hold their core max clock until 80°C like they did when things were still done right. Temperature target is essentially meaningless now because of how max core clocks are stair-stepped in stupid bins relating to temperature. https://hwbot.org/submission/4972225_mr._fox_3dmark___time_spy_geforce_rtx_3090_23047_marks/ https://hwbot.org/submission/4972233_mr._fox_3dmark___fire_strike_geforce_rtx_3090_43920_marks/
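The stair-step clock binning complained about above can be illustrated with a toy model. The ~15 MHz bin size and the temperature thresholds below are assumptions for illustration only; they are not NVIDIA's actual tables.

```python
# Toy model of GPU Boost's temperature "stair-stepping": the effective
# clock drops one small bin each time a temperature threshold is
# crossed. The thresholds and the ~15 MHz bin size are assumptions
# for illustration, not NVIDIA's real tables.

def boosted_clock(base_clock_mhz: int, temp_c: float) -> int:
    """Effective clock after thermally induced bin drops."""
    thresholds_c = [35, 42, 50, 58, 65, 72, 80]  # °C, assumed
    bins_lost = sum(1 for t in thresholds_c if temp_c >= t)
    return base_clock_mhz - 15 * bins_lost

for temp in (20, 40, 51, 80):
    print(f"{temp:>3} °C -> {boosted_clock(2250, temp)} MHz")
```

This is why a GPU running 5-10°C warmer can land several bins lower under load even though it is nowhere near its temperature target.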
  7. I've given up on Port Royal for the time being. Things seem back to normal now. I can't get the clocks as high as before, but they are at least holding steady again. Most of the time, 2175 is the highest clock I can get under load. It doesn't matter how high it starts; within a few seconds it is at 2175 and stays there through the end. I did get my best-ever Time Spy and Fire Strike runs in last night, but Port Royal stays in the 15.5K range due to the limited core clock max. I am pretty sure the reduction in clock speed is temperature related. You seem to be able to keep your GPU 20-25°C colder than I can get mine, and that is more than enough temperature difference to be an impediment to GPU benchmark scores. The water in my loop is staying cold enough (if the digital meter is accurate) but the core is getting between 40-50°C. I may try different thermal pads and liquid metal to see if that helps. I used KPX at the last GPU repaste. The issues I had with leaking after cleaning the block and the o-ring slipping out of place required that I take it apart three times before I could get the o-ring to seal again, so I could also have a pad out of place that is interfering with contact. But, I have noticed my CPU temps are also higher using KPX than they are using Phobya, Cryo or MX4. I like how easy KPX is to use, but so far it doesn't seem to be as effective at transferring the heat. Everything I have used it on (both desktops and laptop) has higher temperatures using KPX. It may be ideal for sub-zero cooling because it doesn't harden and crack. I have purchased them a number of places, including eBay. I have purchased more from Kinguin.net than anywhere else. Sometimes a few bucks more than eBay, but always reliable (so far) for me. https://www.kinguin.net/listing?active=0&hideUnavailable=0&phrase=windows&page=0&size=25&sort=bestseller.total,DESC
  8. I must be missing something, because the first picture you posted had clocks flatline in the 15.6K run posted earlier? That is because I had boost locked. The core was set to 2250 but only held 2160. Without boost locked it varied between about 1950 and 2190, constantly changing and always at substantially lower clocks than expected. Me too. I hope so. Still need to verify it on chilled water at higher clocks first. But, it is definitely looking better.
  9. It seems like that is what the problem was. So, all this time I haven't been benching because I thought something was messed up with my GPU and it turns out it was because I forgot about them silly dip switches. I guess that makes ME the DIP, LOL. Here is a "hot" run (51°C with no chiller) using a heavy memory overclock and modest core overclock. Still static core clocks except for the 30MHz thermally induced drop in frequency. The sad part is, that is less than 700 3DMarks lower than my highest score achieved with the dip switches set to the "on" position.
  10. OK, here it is. It might have been those dip switches that I forgot about. Notice the GPU core clocks are a flat-line all the way across. I will know more when I fire up the chiller and see how it looks with an overclock, but that flat-line is how it used to look whether stock or up to my max stable 2250. It would stay exactly where I put it, or it would crash if it couldn't be pushed that far, but none of that retarded dynamic bouncy clocking bullcrap. I really don't like that spastic behavior at all, LOL... for CPU or GPU.
  11. I will do that later this evening. I am not at my computer right now. Just max the watts slider with PX1 and leave everything else on defaults?
  12. I just totally spaced it off until now about the dip switches. I don't remember if I was on X570 Crosshair or Z590 Dark when I took the block apart for cleaning, but in either case that is when I changed them. At the time I thought, I should test this and see what changes. I got busy with other things and didn't do any benching for a while. Then when I came back to it all my scores were lower and the GPU core clocks were dropping like they never had before. I just pulled the GPU so I could see the dip switches and put them all back to the "off" position. So, two of the suggestions were handled in that one operation. GPU is running x16 4.0. Are your defaults in Classified and PX1 with the 520W vBIOS the same as this? Boost Lock On: Boost Lock Off (and Classified Tool reset after that change, since Boost Lock changes that in Classified):
  13. I may try that, but I have 2 versions of W10, W11 and W7 and the behavior is the same on all of them. It is not only Port Royal, but all benchmarks right now. It is only coincidental that I am focusing on Port Royal, which is kind of ironic because it is the one I care the least about. All of the OSes are clean installs (in an effort to fix this, in fact). I just remembered another change I made when I cleaned the block. I had totally forgotten about it. Actually two changes. I can try undoing both. One is the full back plate thermal pad. The other (which I will try first) is I set all of the dip switches to the optional position. That may be conflicting with using Classified and causing something unexpected. Are your dip switches set to factory defaults? I totally forgot about changing the dip switches. That could be it right there.
  14. If that is the case, maybe an "error" in assembly the first time I installed the Hydro Copper produced a favorable malfunction that was unfortunately corrected when I reassembled it after cleaning. Maybe I will get lucky and make the same "mistake" again, like over-tightening a specific screw or shorting something out that causes it to malfunction in a highly desirable manner that neither EVGA nor NVIDIA intended. I hope so anyway. Sometimes malfunction is a good thing, LOL. Depends on whose definition applies. I certainly don't like how it is now. I haven't been very successful at GPU benchmarks since I cleaned it. Maybe just a remnant of my streak of bad luck. At least it still works, so I can be thankful I didn't have an accident that killed it like I did the 11900K or the golden 10900K I bought from Talon.
  15. I will turn it off when I am chasing numbers, after I figure out how to get my GPU to perform as well as or better than it used to and not drop clocks under load. It may be something at a hardware level, because this behavior started when I disassembled and cleaned the Hydro Copper. I didn't have the issue of not holding clocks under load before, and it acts like it might be a thermal issue. It behaves the way it would if the GPU were getting hot instead of running on cold water, but none of the sensors suggest it is. This began before the change to Z690, so at least I don't have to wonder if it is something other than the GPU.
  16. Rivatuner Statistics Server OSD so I can see what the core clocks, power and voltage are doing during a benchmark run.
  17. About 1,000 points less now. Starting at 2265 on core, it runs about 1975 max and bounces around like a pinball, with more than the 520W vBIOS limit showing in the RTSS OSD. Let me try lowering the voltage and see if it gets better or worse. Edit 1: Lowered voltage and that put me back where I was with the XOC vBIOS. About 15.5K in Port Royal. If I want to see 2100 on core, I have to set the offset for 2265 because the actual is about 150MHz less than what is requested or set. It seems like it NEVER does what you tell it to. You always get less, so you need to demand more than you expect. (Gee, that sounds like almost every area of life now, LOL. If you want something barely good enough you have to demand more and have ludicrous expectations.) Edit 2: So it looks like the new way is to figure out how much false information you have to feed PX1 just to get the GPU to pull a half-assed run without going too far beyond what works and what won't. Nice. (not really) I guess it's time to start over on learning what I can get away with. I just wish I knew what changed with the XOC vBIOS no longer holding boost clocks under load. I liked that a WHOLE LOT better than this gamer-boy vBIOS crap.
  18. I am not sure that even shows as an option. That, or I just stopped looking at the Re-Bar tab because I didn't care. I ignore almost everything that is made for gaming unless I think it offers something that might bump my benchmark scores. I don't game enough to really care anymore. If there is any thought that it might hurt my benchmark scores (crap like G-Stink) then it's not only no, but hell no, LOL.
  19. I just remembered that and did it. Thanks. That's pretty handy.
  20. I don't know. Never tried it. I've basically ignored anything other than the XOC firmware since the day Vince sent it, because my benchmarks are all pulling way beyond 520W. I have tried tons of different drivers and it doesn't seem to be driver related. Is there a 520W vBIOS with ReBAR support? I will give it a shot and see if it is the same, better or worse.
  21. I just don't get this. More voltage, less voltage, load clock sucks. Max core temp is under 40°C. Pretty much kills benching fun.
  22. I have relatives who are school teachers, and some of them are very intelligent, so there are always exceptions. But, the concept here is similar to the old cliché, "Those who can, do; those who cannot, teach." Another way of putting it: "When you have no go, focus on show." It is the sleepers you should be worried about. They'll catch you off guard and the outcome can be lethal. RGB is like sword-rattling... using a toy sword.
  23. I think where it has the potential to be most misleading is where overclocking headroom is concerned. I've noticed that to some extent where the CPU binning process is concerned. I've also seen it in the past with GPU ASIC scores. The silicon with the better numbers almost always runs cooler with lower voltage at stock or near stock. But once in a while the "better" silicon doesn't overclock very well or as high no matter how much voltage you give to it. If you mostly play games or if you're running a turdbook with horrible thermal management, overclocking probably isn't something that you're going to lose a lot of sleep over. But, if you're into overclocking more than anything else it's kind of tragic. I'd rather have a sample that overclocks higher and performs better even though it takes more voltage and it's harder to cool. But I wouldn't if I was the kind of person I just described and overclocking wasn't my primary interest.
  24. OK, the BIOS and vBIOS changes didn't break my Linux installation, so that is good.
  25. That did it. I guess the latest version with DWM is the problem. Thank you.
