Everything posted by Aaron44126
-
The current Alder Lake model has only two NVMe slots. (I looked it up just a bit ago, and it makes sense: Alder Lake H has a much more limited PCIe setup than Alder Lake HX.) I don't know about older models; it wouldn't surprise me if they previously offered four slots. Heck, I used to be somewhat interested in the Alienware 17" line back when they also offered four NVMe slots, but that is gone now too. Intel can sort of be thanked for this reduction in PCIe capability; Alder Lake H has a much more limited setup than Tiger Lake H did.
-
Lenovo has only put out a 16" for this generation. Razer's is only Alder Lake H (as @1610ftw mentioned). No 128GB, no 4× NVMe; these require Alder Lake HX. (No numeric keypad!) The only other manufacturer offering a sort-of real 17" workstation is MSI, and I haven't heard great things about their business-level support. There's also how long support goes on for. Dell is still releasing BIOS and driver updates for the Precision 7X10 (7 years old!). HP is a runner-up since they have somehow managed to cram 4× NVMe into their 16" ZBook. (Any info on how the GPU performs in that system? HP is sort of known for gimping that too.) We can whine and complain about the Precision 7770, but as far as proper 17" workstations go, there isn't really any competition. 🙂 You can't even pretend to use a 17" gaming system as a workstation, since most of those are also just using Alder Lake H and are thus limited in the RAM and storage department.
-
Manipulating the GPU clock speed

Observation: When gaming, I've been noticing some inconsistent performance. The GPU doesn't seem to react to a change in load "immediately". For example, playing a regular third-person game with a game controller where the right stick controls the camera... Everything is working fine at 60+ FPS. However, if I swing the right stick to snap the camera around quickly, that induces a motion blur effect which is more "expensive" for the GPU to process, and the game stutters for a few moments. It recovers, so if I spin the camera in circles for a while everything is smooth, but the initial stutter is obnoxious. (I've noticed this in two different games.)

Hypothesis: The GPU needs to ramp the speed up to consistently handle the more expensive motion blur effect, but it takes it a good several frames to "realize" this and make the adjustment, resulting in a choppy framerate for maybe half a second. (This is just one example. I've noticed other cases where the GPU seems to lag briefly when the demand increases suddenly. In some situations (basically emulators, or poorly-written games) it can lead to audio stuttering as well.)

So, I'm looking for a way to set the GPU clock to a fixed (high) speed and not let it adjust up and down dynamically based on the load, to see if that fixes this problem. I don't necessarily care about the power efficiency implications in this case.

NVIDIA would have you do this with the nvidia-smi command-line tool. You can just do:

nvidia-smi -ac <memory clock>,<base clock>

...and those clocks will be locked in unless the GPU needs to throttle due to heat/power. Other interesting commands are:

nvidia-smi -q -d SUPPORTED_CLOCKS (list out supported clock speeds)
nvidia-smi -rac (reset clock speed to default/automatic)

But, this method only works with pro GPUs, so if you have the GeForce then you are out of luck. (Sort of wish that I had sprung for the A5500.)
So, I found this ten-year-old guide that uses NVIDIA Inspector.
https://www.overclock.net/threads/guide-nvidia-inspector-gtx670-680-disable-boost-fixed-clock-speed-undervolting.1267918/
(Click "Continue reading" to expand the top post; don't just skip it and scroll down to the thread.)

I've already tried using this command to force the dGPU to the P0 power state, and if I do that, the clock speed is locked, but at 420 MHz, way too slow.

nvidiaInspector.exe -forcepstate:0,0

The guide recommends using the P2 power state, which lets you set the clock to a fixed value. (P0 has you adjust the clock speed using an offset, not a fixed value.) If I force P2, the clock speed is also locked at 420 MHz. But if I try to adjust it as the linked guide above suggests, it just bounces right back to 420 MHz. This is the case with both the NVIDIA Inspector GUI and the command-line interface. I can't figure out how to adjust the base clock speed while the GPU is in the P2 state. (I am able to adjust the memory clock, just not the base clock.)

So, back to trying P0, which allows you to change the clock using an offset rather than a discrete value. The offset it allows you to pick from goes all the way up to +1,000 MHz, though, so that's not terrible. This command successfully locks the dGPU in the P0 power state at a fixed base clock speed of 1,417 MHz.

nvidiaInspector.exe -forcepstate:0,0 -setBaseClockOffset:0,0,1000

This command resets things back to normal.

nvidiaInspector.exe -forcepstate:0,16 -setBaseClockOffset:0,0,0

(Trying to set the offset higher than 1,000 doesn't work. This limitation can vary by GPU and is imposed by the vBIOS.)

I'd like to go a little higher than 1,417 MHz. (My 3DMark run had it staying over 1,500 MHz the whole time.) But it's not the worst situation. 1,417 MHz is higher than the "advertised" boost clock of 1,395 MHz, and if it solves my inconsistency/stuttering problem, then I might be willing to accept the trade-off.
For gaming, I care more about consistency than absolute top performance. I'll play with it while gaming this evening; more experimentation to come. Just dropping this here in case any of you guys also want to try messing with the GPU clock. Let me know if you find a way to force it higher than 1,417 MHz. 🙂 Using P2 would be ideal. Maybe there is some trick required to set the speed in P2 on newer GPUs that I am not finding.

Note: NVIDIA Inspector is a very useful tool, but it hasn't been updated in forever. Its command-line interface is broken on Windows 10 because it tries to use the wrong version of .NET Framework. To fix it, just remove the "nvidiaInspector.exe.config" file. The program still works fine without this file; it will just fall back to a default version of .NET Framework.
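If you'd rather script the lock/unlock toggle than retype the commands each time, a small wrapper can build the exact invocations described above. This is just a sketch under my assumptions (nvidiaInspector.exe reachable on PATH, the +1,000 MHz offset cap from my own vBIOS); adjust for your system.

```python
import subprocess

INSPECTOR = "nvidiaInspector.exe"  # assumed to be on PATH; use a full path otherwise

def lock_gpu_clock(gpu=0, offset_mhz=1000, dry_run=False):
    """Force P0 and apply a base clock offset (the approach described above)."""
    cmd = [INSPECTOR,
           f"-forcepstate:{gpu},0",
           f"-setBaseClockOffset:{gpu},0,{offset_mhz}"]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd

def reset_gpu_clock(gpu=0, dry_run=False):
    """Return to automatic P-state management with a zero offset."""
    cmd = [INSPECTOR,
           f"-forcepstate:{gpu},16",
           f"-setBaseClockOffset:{gpu},0,0"]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```

With dry_run=True the functions just return the command list, which is handy for checking what would be run before committing.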
-
Ran 3DMark Time Spy and got 11,081, the teensiest bit better than my original run when I first got the system. https://www.3dmark.com/3dm/81802568? [Edit] GPU-Z shows resizable BAR as "Disabled" in Windows 10 and "Enabled" in Windows 11 (where I ran that benchmark). GPU-Z shows "Yes" for everything related to resizable BAR support in Windows 10, yet it still reports it as disabled. (Not going to worry any more about it. Clearly it didn't impact performance very much.)
-
I've found Task Manager's reporting of GPU usage to be unreliable; I prefer to use the graph in the "Sensors" tab of GPU-Z if I really care about it. (Though maybe Microsoft is able to get a better read on it with whatever they're doing if hardware scheduling is enabled...)

Actually, I haven't checked myself; I was going off of other reports that I have seen posted. I believe @MyPC8MyBrain actually reported getting the highest CB results on the "Cool" setting. But maybe those were just short runs. I really only bounce between the "Quiet" and "Ultra performance" modes myself.

Well, odd. My NV control panel doesn't show it either; I was going off of earlier recollection, and I know for sure that I checked this shortly after receiving the system. NV control panel doesn't show Dynamic Boost 2.0 either, and I know that was showing there before too. Further, my Precision 7560 does show both of these things and shows that they are both enabled. (The Precision 7560 has the same NVIDIA driver version and same Windows version as my 7770.) Maybe one of the more recent Precision 7770 firmware updates changed something here?

Anyway, I realized that you can also check resizable BAR status in GPU-Z, and that is showing "Disabled" right now. So, here shortly I will try to enable it with your tweak and see if that flips.
-
New Wi-Fi & Bluetooth drivers today.
https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=HGCNG
https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=Y9PHH

Now, I've got some catching up to do...

That's interesting, but my system is already showing resizable BAR enabled (in NVIDIA control panel / system info). I can also see the "Large Memory Range" resource on the dGPU in Device Manager, which is the indicator that rBAR is working. Was there indication that it was not enabled for you before...?

I keep turbo boost disabled (a Windows setting, not a BIOS setting) when I am not doing heavy work. (I have a whole article about this, linked in my sig.) Of course, I have turbo boost enabled for benchmarks & gaming.

I haven't noticed any difference between the Dell-provided drivers and the ones straight from NVIDIA. Performance should be the same, though in the odd case that NVIDIA ships a significant performance improvement in a driver upgrade (e.g., the 522.xx series improves performance for DirectX 12 games), you will have to wait a bit for that to trickle down to Dell's releases. I generally recommend sticking with Dell's, as they have been tested/certified for these laptops, unless you are into gaming, in which case maybe you want more current ones.

Thanks, but once again, as Dell's fan control implementation is completely separate from the Intel IME stuff, I rather doubt that changing this will make any difference at all.

I guess it's a different issue then. I can say for sure that this is not happening on my 7770. How is your mouse attached (USB or Bluetooth or ...)? Can you reproduce the issue just using the touchpad? I'd suggest popping Task Manager open to the "Details" tab, increasing the poll rate (see menus at the top), and seeing if you can pin a spike in CPU use by a specific process to the mouse blips. Otherwise, it might take Windows Performance Analyzer to figure out what is going on here.

Also odd.
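Since the turbo boost toggle came up: on Windows you can effectively disable turbo without touching the BIOS by capping the maximum processor state at 99% in the active power plan via powercfg. Here's a minimal sketch of the idea; the SUB_PROCESSOR / PROCTHROTTLEMAX aliases are the standard ones, but this is my own illustration of the approach, not a script from the article in my sig.

```python
import subprocess

def turbo_commands(enabled: bool):
    """Build the powercfg calls: cap max processor state at 99% (turbo off)
    or restore 100% (turbo on) for the active AC power plan."""
    pct = "100" if enabled else "99"
    return [
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
         "SUB_PROCESSOR", "PROCTHROTTLEMAX", pct],
        # Re-apply the current scheme so the new value takes effect.
        ["powercfg", "/setactive", "SCHEME_CURRENT"],
    ]

def set_turbo(enabled: bool, dry_run=False):
    cmds = turbo_commands(enabled)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

The DC (battery) value would need a parallel /setdcvalueindex call if you want it to apply when unplugged.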
I'm definitely able to fully utilize the GPU with hardware-accelerated GPU scheduling disabled. "Cool" seems to throttle the GPU but not the CPU.
-
You can use them for internal, external, or network drives if you like. I had a B: SSD drive for a while, no trouble with it at all.
-
@brunooo84 I played with the C-states options and found that disabling dGPU C-states causes a definite hit to dGPU performance. The top C-states option seems fine to disable. (This one has existed in the Precision BIOS for a decade or more.) The bottom one regarding dGPU C-states is new and seems to actually cap the dGPU speed. Don't turn that one off.
-
I was poking around in the BIOS to mess with C-states and I noticed another option in the performance section labeled "Intel maximum turbo boost 3.0" or something like that. It says that it will enable the CPU to boost beyond the max turbo speed, but also disable NVIDIA Dynamic Boost 2.0. It is disabled by default. Has anyone messed with that ...?
-
Your image has a mismatched subsystem ID. This is pretty normal in laptops. The subsystem ID is essentially assigned by the motherboard, not by the GPU. This is why you need to mod the INF to get the NVIDIA driver to load if you, for example, take an MXM GPU card from a newer system and drop it into an older system. Its subsystem ID actually changes depending on the system that it is in. The value "burned in" to the BIOS might not match the value that is observed while the card is up and running.

We can even pick out the values from the nvflash output.

10DE = NVIDIA's vendor ID
2420 = NVIDIA's identifier for the GeForce RTX 3080 Ti (laptop version)
0000 = A filler value...
1028 = Dell's vendor ID
0B2A = Dell's identifier for the Precision 7670

(You see these same values if you poke around NVIDIA's driver INF files.)

Anyway. Comparing your image and my image, they mismatch on the subsystem ID and also the board ID. You can override the subsystem ID mismatch (-6) but not the board ID mismatch. nvflash has an option to override the board ID mismatch (-5), but NVIDIA stopped allowing it with Turing cards. So, you need a patched version of nvflash that ignores this mismatch in order to complete the cross-flash. The problem is that the 3080 Ti is "too new" and a patched version of nvflash that supports it has yet to surface. I think that it will inevitably appear, maybe in just a few weeks/months when people want to start cross-flashing GeForce 4000 (desktop) cards and someone is motivated to mod a newer version of nvflash.

In the "old days" you could modify the vBIOS file itself and make all sorts of things work. You could just change the IDs in the vBIOS file to match what you needed (as well as arbitrarily set the clock speeds, power limits, etc.). My own flashing experience comes from messing with overclocking the old Quadro K5000M. Anyone could produce vBIOS images to make it run faster than stock.
Since Pascal, NVIDIA GPUs are enforcing digital signatures on the vBIOS, so only NVIDIA can produce valid vBIOS images.
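To make the field layout above concrete, here's a toy parser for the ID tuple. The dash-separated input format is my own illustration (not literal nvflash output); the field meanings are the ones listed above.

```python
def parse_pci_ids(raw: str):
    """Split a dash-separated ID string into the fields discussed above.
    Layout: vendor-device-filler-subsysVendor-subsysDevice."""
    vendor, device, filler, subsys_vendor, subsys_device = raw.upper().split("-")
    return {
        "vendor": vendor,                # 10DE = NVIDIA
        "device": device,                # 2420 = GeForce RTX 3080 Ti (laptop)
        "filler": filler,                # 0000 = filler value
        "subsys_vendor": subsys_vendor,  # 1028 = Dell
        "subsys_device": subsys_device,  # 0B2A = Precision 7670
    }

ids = parse_pci_ids("10DE-2420-0000-1028-0B2A")
```

Cross-flashing tools compare the subsystem pair (and the board ID, which isn't part of this tuple) between the image and the card, which is why the 1028/0B2A portion is what trips the -6 mismatch check.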
-
NVIDIA Inspector shows it on the main window. P0 is "best performance", P8 is the low-performance "idle" state, and there are some in-between states. NVIDIA Inspector also has command-line flags, and you can force the P-state to a certain value using those. We've noticed in the 7X70 thread that Dell blocks P0 if you have the system set to the "Quiet" or "Cool" thermal mode, so you take a big GPU performance hit in those modes. Got me thinking about whether something is causing the GPU here to drop to a lower P-state, or whether it is actually staying in P0 but at a reduced clock speed.
-
You just want to have them send the "same part" you already have... So far, I think that everyone who has tried has ended up receiving the Sunon part. They should be able to figure out the part number (they shouldn't even be allowed to send you a part that isn't already in your system), but if they can't, you can find it yourself. Go to dell.com/support, put in your service tag number, and then click "View product specs" on the next page and it will take you to the full parts list. Look under the GPU section for one labeled "ASSY,HTSNK".
-
I am curious about this, it seems pretty odd. I am wondering if maybe just an early/testing vBIOS was included on their card by mistake. Did this user give the version number for the vBIOS in their system that was reporting 157W TDP? It should be visible on the very same screen in NVIDIA control panel where you can see the card's max TDP. NVIDIA is good about never reusing the same version number and kicking out a new one for each vBIOS variation that they produce. (Just another thing that could maybe be used to quickly tell which vBIOS you have, or maybe even tease out which one is "newer" than the other.)
-
If anything it will improve performance at the cost of higher power use / lower battery life. It prevents the CPU from dropping to lower power states in certain conditions. ...Though I did some testing on my Precision 7530 which had this issue and I found the impact to both performance and battery life to be pretty negligible.