NotebookTalk

Mr. Fox

Member
  • Posts

    4,613
  • Joined

  • Days Won

    475

Everything posted by Mr. Fox

  1. I hope the scalping bastard pigs that hoarded GPUs and drove the prices to insane levels lose many millions or even billions of dollars and flood the market with more GPUs than the market can come even remotely close to handling. It would be great if GPUs started selling for like 20% of MSRP with a supply that is 5 or 10 times greater than the demand, LOL. Imagine getting a 3060 for $150 or a 3090 for $350, and as many as you want. That would be awesome. 3090 SLI for benching. 😁 Edit: with a BIG power supply. No 500W gamerboy PSU.
  2. Unrelated matter: last night I was looking at a lot of things closely and noticed the CableMod sleeved cables were getting kind of scary looking. I already knew some of the 24-pin motherboard wires had melted insulation from the 7980XE overclocking overheating the cables. I saw the same thing starting to happen on all three 8-pin GPU cables, one more so than the other two. So, I installed the brand new EVGA 8-pin GPU and 24-pin motherboard cables that came with the 1600W PSU, which I had never used.
  3. That is the problem right now. I have no spare parts available that I could use for testing at the moment. I sold off lots of the spare parts collecting dust and used cash from those sales to help pay for new stuff I use daily. Spare CPUs, motherboards, air coolers, RAM, waterblocks, etc. were sold or given away. I even built an entire desktop from spare parts and sold that. The only thing I had to acquire was a mid-tower case, and I got one for free from a relative that had no use for it. Here is the system I built and sold using spare parts with the free case. I made $650 off of parts sitting in boxes in my closet that were not being used for anything.
  4. That is always true, especially for GPU. Even without overclocking, lower temps generally result in higher GPU core clocks. If you are enjoying record-breaking results with your laptop, might be best to not "fix it" right now. It might not perform as well if you disturb things. I would say leave it alone if it is working well and hold off on changing anything until your results start trending in the opposite direction.
  5. Yeah, that makes perfect sense. In fact, that was part of my thinking in using the EK manifold, since the CPU and GPU can flow independently (in parallel rather than series) and have completely independent flow rates. To test that I need to remove an internal plug from a chamber and relocate one QDC fitting to another port on the manifold, so that should be easy to do as soon as I take time to do it. Otherwise, I will have to spend money to buy parts I do not have just for testing. I may do that anyway. I can probably grab stuff from a yard sale or Craigslist cheap enough.
  6. Yes, I agree. I need to primarily focus on the GPU core temps on chilled water, but I don't think I have the parts I need to isolate the CPU from the loop. It would not make any sense to test the GPU without chilled water because that is not an issue that needs to be resolved. It's not an overall issue either. It's only that I can't get it cold enough for higher GPU core clock when benching. It's perfectly fine for everything else other than that. And, there is a huge difference between the temperature of the chilled water in the loop and the GPU core. I think that is where it seems logical to start the witch hunt. Probably focusing first on the contact point and TIM sitting between the GPU block and the GPU core. There is roughly a 20-25°C delta there.
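The logic in the post above can be put into rough numbers. Treating the block-plus-TIM contact as a single thermal resistance, R = ΔT / P gives a quick sanity check on whether a 20-25°C water-to-core delta is plausible. This is a back-of-the-envelope sketch; the ~400 W load and the temperatures below are assumed example values, not measurements from this loop:

```python
# Rough thermal-resistance check for the water-to-core delta.
# All numbers here are illustrative assumptions, not measured values.

def thermal_resistance(core_c: float, water_c: float, power_w: float) -> float:
    """Effective block+TIM thermal resistance in degC/W: R = delta-T / P."""
    return (core_c - water_c) / power_w

# Assumed: ~400 W GPU load, 45 degC core, 20 degC water -> 25 degC delta.
r = thermal_resistance(45.0, 20.0, 400.0)
print(f"{r:.4f} degC/W")  # 0.0625 degC/W
```

A well-mounted full-cover block would normally sit well below that figure, which is why the big delta points at the mount or the TIM rather than the water temperature.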
  7. I don't have anything I can use on the GPU that I can think of. I could use the pump/block from the KPE AIO if you are finished using it, but I do not have a spare GPU water block and do not know how I could cool the GPU without the Hydro Copper block on it at the moment. Or, did you mean leave only the GPU in the current loop and remove the CPU from the loop? I may have misunderstood what you meant. I had a CPU AIO as a spare part, but I used it in a PC build for one of my sons so I don't have that available. And, I no longer have a CPU air cooler. I got rid of the one I had when I sold off some stuff that was just taking up space in my crowded little office. And... not only performance deficits to contend with, but also a general lack of stability and reliability that needs to be fixed. The WHEA errors and USB drop-out issue are not things I am likely to forgive and forget for a very long time. A system that produces nice benchmark scores loses a lot of its sparkle when it is unreliable and frustrating to deal with for normal everyday use. The 5950X and Crosshair experience was my "21st Century Edsel" in the digital realm.
  8. That's a good idea, but I don't think I have the parts I would need to test that.
  9. What is puzzling about it is everything is freshly cleaned (filters, blocks, etc.) and now the temps are higher, particularly on the GPU. I can't really say for sure on the CPU because that is new and I have very little to compare with before/after. And, the loop flow rate is higher because of everything being freshly cleaned and because I reduced the QDC fitting restrictions when I redid the loop structure. It is about 70 L/H higher than it was before, yet it did not help anything that I can tell. But, the temps went the other direction coincident with cleaning the Hydro Copper block. That happened before I restructured the loop and I think that is where I need to start the investigation. My guess is something with the mounting of the block to the GPU might not be ideal. That is only a guess right now though. There should not be a huge delta between the water temperatures and core temperatures if my logic is correct. If that is an accurate statement, it seems to point to mounting of the GPU block. Edit: It is also possible that using the Fujipoly pad on the entire back plate is affecting core temps negatively by "normalizing" the heat over the whole GPU. Sure, the memory might be 20°C cooler now, but it might be due to the memory forcing some of its heat onto the GPU core because now it has a pathway or conduit to do that which did not exist before.
  10. Maybe I said something in a way that was unclear, but I have not noticed any difference in flow rate based on temperature. It does not seem to vary by a lot. By that I mean it is not different today than yesterday, only that watching the meter over the course of a minute it will change up and down every few seconds and somewhere in the middle is the average and there isn't a big difference between the high/low/average. It is maybe just a little bit lower using anti-freeze than just straight distilled water, but not enough that I could quantify it with a number. I think that is due to the water being lower viscosity. One thing I am going to experiment with (haven't had time) is how using the EK manifold might improve flow rate through the GPU so it is not dependent on the volume of water that can move through the CPU. They could function independently and not have one affect the other. It won't change the flow rate of the loop necessarily, but if it is harder to move water through the CPU block, the GPU won't be starved because it can flow through both of them independently and simultaneously. I hope that makes sense. I don't have the manifold set up that way at the moment, but I plan to test the theory and compare both ways.
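The series-versus-parallel point above maps directly onto the resistor analogy for hydraulic restriction: in series every drop of coolant fights both blocks, in parallel each branch sees only its own block. A minimal sketch with made-up restriction values (arbitrary units, not measurements from this loop):

```python
# Series vs parallel hydraulic restriction, by analogy with resistors.
# Restriction values are illustrative placeholders, not measured data.

def series(r_cpu: float, r_gpu: float) -> float:
    """Total restriction when coolant passes through both blocks in turn."""
    return r_cpu + r_gpu

def parallel(r_cpu: float, r_gpu: float) -> float:
    """Total restriction when each block gets its own branch off a manifold."""
    return 1.0 / (1.0 / r_cpu + 1.0 / r_gpu)

r_cpu, r_gpu = 2.0, 1.0  # assume the CPU block is twice as restrictive
print(series(r_cpu, r_gpu))            # 3.0
print(f"{parallel(r_cpu, r_gpu):.2f}")  # 0.67
```

The parallel total is lower than either block alone, which is the point of the manifold: a restrictive CPU block no longer starves the GPU branch.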
  11. Yes, but that was before I took it apart and cleaned the trash out of the Hydro Copper. GPU core is almost always above 40°C under load now. I have even seen it get into the high 50s, but usually in the low to mid 40s under load, in the low to mid teens at idle. It used to drop into the single digits and stayed in the teens under load. The loop shows water temperature between 14-20°C and a flow rate between 190-220 L/H. The temperature and flow rate measurements are taken at the return line side of the loop, after having already passed through the CPU and GPU. Looking at it right now, without the chiller being used since last night, it is hovering around 30-31°C and 200-205 L/H flow rate.
  12. Brother @johnksss it looks like the temperature readings from 3DMark are a LOT lower on your GPU than mine are. It doesn't take more than 5-10°C to cause a pretty big drop in GPU core clocks, as we both know. I really hate that about modern GPUs. I wish they would hold their core max clock until 80°C like they did when things were still done right. Temperature target is essentially meaningless now because of how max core clocks are stair-stepped in stupid bins relating to temperature. https://hwbot.org/submission/4972225_mr._fox_3dmark___time_spy_geforce_rtx_3090_23047_marks/ https://hwbot.org/submission/4972233_mr._fox_3dmark___fire_strike_geforce_rtx_3090_43920_marks/
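The "stair-stepped bins" complaint above can be illustrated with a toy model of temperature-binned boost clocks. The thresholds and step sizes below are assumptions loosely patterned on observed boost behavior, not an official table:

```python
# Toy model of temperature-binned boost clocks. The bin table is an
# illustrative assumption, not NVIDIA's actual (undocumented) table.

BINS = [  # (temperature ceiling degC, clock offset MHz vs. max boost)
    (35, 0), (45, -15), (50, -30), (58, -45), (65, -60), (75, -90),
]

def boost_clock(max_clock: int, temp_c: float) -> int:
    """Return the clock for a given core temperature under the toy bin table."""
    offset = BINS[-1][1]  # hottest bin applies past the last ceiling
    for ceiling, off in BINS:
        if temp_c < ceiling:
            offset = off
            break
    return max_clock + offset

print(boost_clock(2205, 30))  # 2205 -> a cold core holds max boost
print(boost_clock(2205, 48))  # 2175 -> a few degrees warmer, two bins down
```

That is why a 5-10°C difference between two cards translates straight into a clock (and score) gap, with the temperature target slider never entering into it.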
  13. I've given up on Port Royal for the time being. Things seem back to normal now. I can't get the clocks to hold, but they are at least holding steady again. Most of the time, 2175 is the highest clock I can get under load. It doesn't matter how high it starts, within a few seconds it is at 2175 and staying there through the end. I did get my best ever Time Spy and Fire Strike runs in last night, but Port Royal stays in the 15.5K range due to the limited core clock max. I am pretty sure the reduction in clock speed is temperature related. You seem to be able to keep your GPU 20-25°C colder than I can get mine and that is more than enough temperature difference to be an impediment to GPU benchmark scores. The water in my loop is staying cold enough (if the digital meter is accurate) but the core is getting between 40-50°C. I may try different thermal pads and liquid metal to see if that helps. I used KPX at the last GPU repaste. The issues I had with leaking after cleaning the block and having the o-ring slipping out of place required that I take it apart three times before I could get the o-ring to seal again, so I could also have a pad out of place that is interfering with contact. But, I have noticed my CPU temps are also higher using KPX than they are using Phobya, Cryo or MX4. I like how easy KPX is to use, but so far it doesn't seem to be as effective at transferring the heat. Everything I have used it on (both desktops and laptop) has higher temperatures using KPX. It may be ideal for sub-zero cooling because it doesn't harden and crack. I have purchased them a number of places, including eBay. I have purchased more from Kinguin.net than anywhere else. Sometimes a few bucks more than eBay, but always reliable (so far) for me. https://www.kinguin.net/listing?active=0&hideUnavailable=0&phrase=windows&page=0&size=25&sort=bestseller.total,DESC
  14. I must be missing something, because the first picture you posted had clocks flatlined in the 15.6K run posted earlier? That is because I had boost locked. The core was set to 2250 but only held 2160. Without boost locked it varied between about 1950 and 2190, constantly changing and always substantially lower than expected. Me too. I hope so. Still need to verify it on chilled water at higher clocks first. But, it is definitely looking better.
  15. It seems like that is what the problem was. So, all this time I haven't been benching because I thought something was messed up with my GPU and it turns out it was because I forgot about them silly dip switches. I guess that makes ME the DIP, LOL. Here is a "hot" run (51°C with no chiller) using a heavy memory overclock and modest core overclock. Still static core clocks except for the 30MHz thermally induced drop in frequency. The sad part is, that is less than 700 3DMarks lower than my highest score achieved with the dip switches set to the "on" position.
  16. OK, here it is. It might have been those dip switches that I forgot about. Notice the GPU core clocks are a flat-line all the way across. I will know more when I fire up the chiller and see how it looks with an overclock, but that flat-line is how it used to look whether stock or up to my max stable 2250. It would stay exactly where I put it, or it would crash if it couldn't be pushed that far, but none of that retarded dynamic bouncy clocking bullcrap. I really don't like that spastic behavior at all, LOL... for CPU or GPU.
  17. I will do that later this evening. I am not at my computer right now. Just max the watts slider with PX1 and leave everything else on defaults?
  18. I just totally spaced it off until now about the dip switches. I don't remember if I was on X570 Crosshair or Z590 Dark when I took the block apart for cleaning, but in either case that is when I changed them. At the time I thought, I should test this and see what changes. I got busy with other things and didn't do any benching for a while. Then when I came back to it all my scores were lower and the GPU core clocks were dropping like they never had before. I just pulled the GPU so I could see the dip switches and put them all back to the "off" position. So, two of the suggestions were handled in that one operation. GPU is running x16 4.0. Are your defaults in Classified and PX1 with the 520W vBIOS the same as this? Boost Lock On: Boost Lock Off (and Classified Tool reset after that change, since Boost Lock changes that in Classified):
  19. I may try that, but I have 2 versions of W10, W11 and W7 and the behavior is the same on all of them. It is not only Port Royal, but all benchmarks right now. It is only coincidental that I am focusing on Port Royal, which is kind of ironic because it is the one I care the least about. All of the OSes are clean installs (in an effort to fix this, in fact). I just remembered another change I made when I cleaned the block. I had totally forgotten about it. Actually two changes. I can try undoing both. One is the full back plate thermal pad. The other (which I will try first) is I set all of the dip switches to the optional position. That may be conflicting with using Classified and causing something unexpected. Are your dip switches set to factory defaults? I totally forgot about changing the dip switches. That could be it right there.
  20. If that is the case, maybe an "error" in assembly the first time I installed the Hydro Copper produced a favorable malfunction that was unfortunately corrected when I reassembled it after cleaning. Maybe I will get lucky and make the same "mistake" again, like over-tightening a specific screw or shorting something out that causes it to malfunction in a highly desirable manner that neither EVGA nor NVIDIA intended. I hope so anyway. Sometimes malfunction is a good thing, LOL. Depends on whose definition applies. I certainly don't like it how it is now. I haven't been very successful at GPU benchmarks since I cleaned it. Maybe just a remnant of my streak of bad luck. At least it still works. So I can be thankful I didn't have an accident that killed it like I did the 11900K or the golden 10900K I bought from Talon.
  21. I will turn it off when I am chasing numbers after I figure out how to get my GPU to perform as well or better than it used to and not drop clocks under load. It may be something at a hardware level because this behavior started when I disassembled and cleaned the Hydro Copper. I didn't have the issue of not holding clocks under load before, and it acts like it might be a thermal issue. It behaves the way it would if the GPU were getting hotter and not running on cold water, but none of the sensors suggest it is. This began before the change to Z690, so at least I don't have to wonder if it is something other than the GPU.
  22. Rivatuner Statistics Server OSD so I can see what the core clocks, power and voltage are doing during a benchmark run.
  23. About 1,000 points less now. Starting at 2265 on core, it runs about 1975 max and bounces around like a pinball with more than the 520W vBIOS limit showing in RTSS OSD. Let me try lowering the voltage and see if it gets better or worse. Edit 1: Lowered voltage and that put me back where I was with the XOC vBIOS. About 15.5K in Port Royal. If I want to see 2100 on core, I have to set the offset for 2265 because the actual is about 150MHz less than what is requested or set. It seems like it NEVER does what you tell it to. You always get less, so you need to demand more than you expect. (Gee, that sounds like almost every area of life now, LOL. If you want something barely good enough you have to demand more and have ludicrous expectations.) Edit 2: So it looks like the new way is to figure out how much false information you have to use with PX1 just to get the GPU to pull a half-assed run without going too far beyond what works and what won't. Nice. (not really) I guess it's time to start over on learning what I can get away with. I just wish I knew what changed with the XOC vBIOS no longer holding boost clocks under load. I liked that a WHOLE LOT better than this gamer-boy vBIOS crap.
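The "demand more than you expect" arithmetic from Edit 1 above, as a one-liner. The ~150 MHz gap is the pattern observed in the post, not a documented rule, so it is parameterized here:

```python
# Back-solve the PX1 setpoint for a target delivered clock, given an
# observed (assumed constant) gap between requested and actual clocks.

DROOP_MHZ = 150  # observed gap from the post; treat as an assumption

def required_setpoint(target_mhz: int, droop_mhz: int = DROOP_MHZ) -> int:
    """Clock to request so the delivered clock lands on target_mhz."""
    return target_mhz + droop_mhz

print(required_setpoint(2100))  # 2250 -> ask for ~2250-2265 to see 2100 under load
```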
  24. I am not sure that even shows as an option. That, or I just stopped looking at the Re-Bar tab because I didn't care. I ignore almost everything that is made for gaming unless I think it offers something that might bump my benchmark scores. I don't game enough to really care anymore. If there is any thought that it might hurt my benchmark scores (crap like G-Stink) then it's not only no, but hell no, LOL.
  25. I just remembered that and did it. Thanks. That's pretty handy.