Everything posted by Aaron44126
-
You don't need a drive specific to this model laptop. You can use any SATA laptop optical drive that meets these criteria:
- 12.7mm height
- Slot loading
- Disc reading and recording features of your choice (DVD +/- RW, BD-R, etc...)
Some options would be Philips/Lite-On DL-8A4SH12C and Toshiba TS-T633; these are easy enough to find on eBay/etc. If your new drive comes with a plastic bezel, remove it. (It just snaps off.) Take the old drive out, detach the mounting hardware and attach it to the new drive, and then put the new drive in.
-
Well this is just whacked. I went through a process to figure out how to make sure that the GPU is powering off when it is not needed in Linux. Now that this is "working", I am finding that the fan speed goes up when the GPU is powered off. If the GPU is on the fans will periodically turn off completely after dropping below 1000 RPM, but if the GPU is off then the fan speed floor is around 1150 RPM for "optimized" mode and 1300 RPM for "quiet" mode. I reproduced it multiple times. All that it takes to turn the GPU on and cause the fan speed to go down is to have something like a GPU temperature measurement going. I guess I should use a power meter to figure out if the GPU is "really" powering off (lower total system power use). But one way or another, it seems that something is off with Dell's implementation of either the GPU power cycle or fan curves when the GPU is off. Maybe this also explains why I saw the fans stop cycling off a few months ago back when I was using Windows.
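To watch the fan speeds while reproducing this, here is a minimal sketch that reads the hwmon fan inputs. The helper name and the hwmon path are my own assumptions; which hwmon device exposes the fans varies by platform (on Dell laptops it is often the dell_smm device), so check /sys/class/hwmon on your system first.

```shell
#!/bin/sh
# Print current fan speeds (RPM) from a hwmon directory.
# The directory is passed in because the hwmon index varies per boot
# and per platform, e.g. /sys/class/hwmon/hwmon4 (an assumption).

print_fan_speeds() {
    dir="$1"
    for f in "$dir"/fan*_input; do
        [ -e "$f" ] || continue          # no fan files present
        label="${f##*/}"                 # e.g. "fan1_input"
        rpm="$(cat "$f")"
        printf '%s: %s RPM\n' "$label" "$rpm"
    done
}
```

Running this in a loop (e.g. under "watch -n 2") while toggling GPU activity makes it easy to see the fan-speed floor change.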
-
I've done a bit of a "deep dive" on getting NVIDIA graphics switching working properly on Linux. For now, I'm going to post everything that you need to know to make the GPU power down when not in use, which was a bit tricky to get working. I plan to update this post in the future with some tips for getting applications to run on a particular GPU, but I have found that the "automatic behavior" works decently well in this case.

Make the GPU power down properly when not in use

This is doable, but the implementation seems to be a bit "brittle"; it has to be configured just so, and it was not set up properly out-of-the-box in Ubuntu/Kubuntu. I had been sitting with the GPU always on and drawing 13-18W of power even when no programs were using it.

Note here that I am assuming that the NVIDIA GPU PCI device ID in Linux is 01:00.0. This seems to be "standard", but you can check using the command "lspci". Some commands below will need to be tweaked if the PCI device ID is different. Also note that everything here most likely applies only to NVIDIA GPUs based on the Turing architecture or later. My understanding is that NVIDIA's Linux driver does not support automatically powering off the GPU on Pascal and older architectures, so for those you will need to resort to older, messier methods (Bumblebee bbswitch or direct ACPI commands). Right now, I have only tested this on a GeForce RTX 3080 Ti laptop GPU, which is Ampere architecture.

To check whether the GPU is powered on or not, use:

cat /sys/bus/pci/devices/0000\:01\:00.0/power/runtime_status

If you get "active" then the GPU is powered on, and if you get "suspended" then it is properly powered down. If you get "active" when you think that no programs are using the GPU and it "should" be powered off, then there are a number of things to check.

First, check the value here:

cat /sys/bus/pci/devices/0000\:01\:00.0/power/control

This should come back with the value "auto".
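As a convenience, the two checks above can be wrapped in one small POSIX shell function. The function name is my own, and the device path follows the 01:00.0 assumption; adjust it for your system.

```shell
#!/bin/sh
# Report whether the NVIDIA dGPU is powered down, based on the
# runtime PM files under sysfs. Pass the PCI device directory,
# e.g. /sys/bus/pci/devices/0000:01:00.0 (device ID is an assumption).

gpu_pm_report() {
    dev="$1"
    status="$(cat "$dev/power/runtime_status" 2>/dev/null)"
    control="$(cat "$dev/power/control" 2>/dev/null)"
    echo "runtime_status: $status"
    echo "control: $control"
    if [ "$control" != "auto" ]; then
        echo "WARNING: control is not 'auto'; the GPU will never suspend"
    elif [ "$status" = "suspended" ]; then
        echo "OK: GPU is powered down"
    else
        echo "GPU is active; check for processes using it"
    fi
}
```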
If it instead returns "on", then the NVIDIA GPU will never power off. To get it set to "auto", proper udev rules need to be in place:

# Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"
# Disable runtime PM for NVIDIA VGA/3D controller devices on driver unbind
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on"
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on"

You can find the udev rules in /etc/udev/rules.d and /lib/udev/rules.d. If there is not an existing rule that needs to be tweaked, you can add these in a new file. I put them in a file named "/lib/udev/rules.d/80-nvidia-pm.rules". (All of this was borrowed from Arch.) Note that after making udev configuration changes, you should fire off:

update-initramfs -u

...and then reboot.

The next thing to check is the NVIDIA kernel module configuration. You can check the current kernel module parameters with this command:

cat /proc/driver/nvidia/params

The value for "DynamicPowerManagement" should be 2. (NVIDIA documentation indicates that "3" is also OK for Ampere and later architectures.) You can set this value with a rule in modprobe. Look in /etc/modprobe.d and /lib/modprobe.d for an existing rule to change, and if there is not one there, then make a new file with the rule. You just need to add this line to a file ending with ".conf":

options nvidia "NVreg_DynamicPowerManagement=0x02"

...and then run:

update-initramfs -u

...and reboot. Use the command above to check the kernel module parameters and make sure that the change stuck.
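To verify that the change stuck without eyeballing the whole params dump, here is a quick sketch. The helper name is mine, and it assumes the "DynamicPowerManagement: N" line format I've seen in /proc/driver/nvidia/params; the file path is passed in so you can point it at the real file.

```shell
#!/bin/sh
# Check that DynamicPowerManagement is 2 (or 3, OK on Ampere and later)
# in the NVIDIA kernel module parameters. Usage:
#   check_dpm /proc/driver/nvidia/params

check_dpm() {
    params_file="$1"
    # Extract the value following "DynamicPowerManagement:" (assumed format)
    value="$(sed -n 's/^DynamicPowerManagement: *//p' "$params_file")"
    case "$value" in
        2|3) echo "DynamicPowerManagement=$value (OK)" ;;
        *)   echo "DynamicPowerManagement=$value (needs modprobe option)" ;;
    esac
}
```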
Finally, if the GPU still won't power off, you should confirm that there is nothing running in the background that could be using it. You can run the command "nvidia-smi" and it will print out some status information, which includes a list of the processes using the NVIDIA GPU at the bottom. However, that may not be a sufficient check. In my case, I discovered that having the Folding@Home client service active would cause the GPU to always stay powered on, even though it was not doing any GPU work and nothing in nvidia-smi showed Folding@Home using it. (Using "lsof" to look for processes accessing libcuda.so would catch that.)

In "nvidia-smi", you might see the Xorg process using the GPU with a small amount of memory listed. Xorg automatically attaches to all available GPUs even if they are not driving a display. You can disable this behavior by adding this configuration blob to a file in /etc/X11/xorg.conf.d (filename ending with ".conf" if you want to add a new one):

Section "ServerFlags"
    Option "AutoAddGPU" "off"
EndSection

This will cause Xorg to only attach to whatever GPU the BIOS thinks is the default. It might break displays attached to a different GPU if you have a multi-monitor configuration. ...In any case, this isn't necessary to actually get the GPU to power off. The GPU can still suspend with Xorg listed in the process list of "nvidia-smi" as long as it is not actually driving a display.

Also note that running the command "nvidia-smi" causes the GPU to wake up, and you will need to wait several seconds after running it before the GPU will return to its "suspended" state. This will also be the case for most other tools that monitor the NVIDIA GPU, like nvtop. (Just one other thing that could trip you up when checking to see if things are working right.)

If you do have the GPU powering off properly, you might want to also check the power draw of the whole system.
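The lsof check can be sketched like this. The helper name is mine; it just filters lsof output for libcuda mappings, reading from stdin so you can pipe lsof into it (scanning everything with lsof is slow, so treat this as a diagnostic one-off).

```shell
#!/bin/sh
# List processes that have libcuda mapped, which keeps the GPU awake
# even when nvidia-smi shows no compute work for them. Usage:
#   lsof 2>/dev/null | find_cuda_users

find_cuda_users() {
    # lsof columns: COMMAND PID USER FD TYPE ... NAME
    awk '/libcuda\.so/ { print $1, $2 }' | sort -u
}
```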
There are apparently a few laptops out there that have an incorrect BIOS configuration and end up drawing more power with the GPU "off" than with it "on" in a low-power state. (I've seen MSI mentioned specifically.)
-
M4800 Owner's Thread
Aaron44126 replied to unnoticed's topic in Pro Max & Precision Mobile Workstation
This is "normal Windows behavior" which I believe has been addressed in Windows 11, which is better at remembering your window positions for multi-screen setups (though I have not personally tried it to see if it really works better). It might work better in this case if you make sure that the external screen is set as the "primary" screen when both displays are running. -
Windows 7/8/8.1 Mozilla has detailed plans for end of support. Firefox 115, releasing in July, will be the last version supported. Firefox 116 (August) will not support Windows 7/8/8.1. Firefox 115 will have an ESR version that will continue to get updates through September 2024, so Windows 7/8/8.1 users may use that to stretch out support.
-
Precision M4600 Owners Thread
Aaron44126 replied to Hertzian56's topic in Pro Max & Precision Mobile Workstation
That would be a normal thing to see if you have graphics switching / Optimus turned on. The display is attached to the Intel GPU in that case and there won't be very many options in NVIDIA control panel. -
Any NVMe drive should work. I’ve used a number of modules in multiple Precision systems and never had an issue. So, not sure what to say here. I have not tried this specific model. I do have an issue with the primary slot in my Precision 7560 (the PCIe4 one) failing to consistently recognize drives, so I don’t use it anymore. It’s still problematic after multiple motherboard replacements. It is expected that drives with a built-in heatsink will not fit. You should use a bare drive and the heatsink included with the system.
-
That "dark" part of the "lid part" of the laptop in the image above is the plastic bezel, which I guess maybe doesn't need to be quite that thick. It has some slightly rubbery material around the edge which I would presume protects the lid if you slam it shut quickly, which also adds 1mm or so to the height. The panel is recessed behind the bezel and only in the "light" part. (And, it doesn't really seem to be as thick as the image shows in person, at least to me...)
-
I use hibernate when I need to transport the laptop. Regarding the dGPU, you can disable/enable it from the command line with DevManView, so a quick batch script will do. If you disable discrete display output in BIOS setup, then external displays will attach to the Intel GPU and this will be an issue less often. For my system:

DevManView.exe /disable "NVIDIA GeForce RTX 3080 Ti Laptop GPU"
DevManView.exe /enable "NVIDIA GeForce RTX 3080 Ti Laptop GPU"
-
Precision 7540 & Precision 7740 owner's thread
Aaron44126 replied to SvenC's topic in Pro Max & Precision Mobile Workstation
Precision 7530 and 7540 use the same chassis so I wouldn't be surprised if that is the case. ...Probably more than you want to deal with, but it occurs to me that you could probably install a Precision 7530 "display assembly" on your Precision 7540 with no trouble at all, if it turned out that made it easier to swap around display panels. -
Precision 7540 & Precision 7740 owner's thread
Aaron44126 replied to SvenC's topic in Pro Max & Precision Mobile Workstation
The bracket will just have a hole where the screw goes, but it won't help affix your panel to the chassis unless the chassis itself also has a place for the screw to go into. -
Precision 7540 & Precision 7740 owner's thread
Aaron44126 replied to SvenC's topic in Pro Max & Precision Mobile Workstation
Most panels that mount with screws have such brackets on them (the panel itself doesn't have the screw holes); whether it will be compatible with your lid depends on if there are "receptors" for the screws in the right place. (I have pulled panels for Precision 7510, 7530, and 7560 that were all mounted in such a way which is why I was surprised to find that Precision 7540 uses adhesive for the panel.) -
Alienware/Precision DGFF compatibility
Aaron44126 replied to Skeletor's topic in Pro Max & Precision Mobile Workstation
This is just accumulated knowledge from this site and also NBR. However, especially if you have done laptop GPU replacements before, you can just look at those DGFF cards and see that there is no way that the Alienware card will work in the Precision. The screws don't go in the same spots. There are no HDMI or DP ports. The shape and size are different (especially accounting for how you'd have to rotate that Alienware card to get the DGFF connectors in the same place). It simply will not fit.
-
M4800 Owner's Thread
Aaron44126 replied to unnoticed's topic in Pro Max & Precision Mobile Workstation
The fans are slightly different sizes and have different enclosures, but the blade design is the same if you have fans from the same manufacturer. If you find a Delta CPU fan beneficial then you'd probably want a Delta GPU fan as well. The system runs the fans at roughly the same speed anyway. (I.e., a load on the CPU only will cause both the CPU and GPU fans to ramp up.) -
M4800 Owner's Thread
Aaron44126 replied to unnoticed's topic in Pro Max & Precision Mobile Workstation
There is at least this in post #6. Higher CFM will be beneficial as well if you care about noise; the fan can get the same work done at lower speed.

Quoting that post:
"Now I changed my cpu fan from the AVC #2 to the Delta fan #3, not only is the fan quieter but also a lot more powerful. The AVC fan ramped its speed to max making a horrible noise while the Delta fan roars off but still has plenty more to give if you manually control the fan speed and if you put your hand in front of the outlet grill you can already feel a hard air flow while the AVC barely tickled your face on full speed." -
M4800 Owner's Thread
Aaron44126 replied to unnoticed's topic in Pro Max & Precision Mobile Workstation
For information on the types of fans available, see this thread and posts by @unnoticed. Delta fan is generally preferred. It is normal for the CPU to run hotter than the GPU with the same wattage. The CPU is harder to keep cool because it concentrates power in tiny "hot spots" where the CPU cores are (a relatively small portion of the die as a whole), while the GPU distributes power use more-or-less across the entire die. (GPU dies tend to be bigger than CPU dies anyway.) -
Alienware/Precision DGFF compatibility
Aaron44126 replied to Skeletor's topic in Pro Max & Precision Mobile Workstation
There is no compatibility between Precision and Alienware DGFF cards. In fact, there is no compatibility between Precision DGFF cards between models with different chassis designs either. The card shape and component layout are completely different even if the connector is the same. The only inter-generational upgrade that has been confirmed to work is using Precision 7X40 cards in a Precision 7X30 system.
-
The provided AC adapter for the Precision 7680 is not USB-C; it is the standard Dell "barrel"-style power connector and it plugs into the left side. If you don't mind connecting two cables, you are free to connect the stock AC adapter to the system and use a dock at the same time. When you do so, you do not need to rely on power from the dock. You can connect a single-cable dock like the WD22TB, or just one of the connectors for the WD19DCS, and the system will still run fine. You only need to connect both of the WD19DCS connectors if you want "single-cable docking" (no standalone AC adapter connected). ...Do note that connecting only one of the WD19DCS connectors might also cause issues if you plan on using a high-bandwidth display setup (e.g. multiple 4K displays). A Thunderbolt dock like the WD22TB should be able to handle this over a single cable, though.
-
Can't adjust brightness on M4800
Aaron44126 replied to K4sum1's topic in Pro Max & Precision Mobile Workstation
This sort of issue popped up semi-regularly on NBR back when these systems were "old enough to have GPU replacements done" but not yet "really old" (2015-2018 timeframe?). There isn't a whole lot of discussion of the pre-7000 series systems here on NBT. -
Can't adjust brightness on M4800
Aaron44126 replied to K4sum1's topic in Pro Max & Precision Mobile Workstation
This is a common problem with an "unsupported GPU" in this system. You should be able to set the brightness level in the BIOS. If you're running with Optimus / graphics switching turned on, it should work, but you might have to use the Intel GPU driver package from Dell, not the one built into Windows or from Intel's web site. If you're running with Optimus / graphics switching turned off, I don't know if there is really a solution to this. 😕 -
Dell has made 330W power adapters for Alienware systems and you can plug one into these systems and it will work… but not provide any performance uplift. (At least, that was the case with Precision 7770.) You can provide feedback but I don’t think Dell wants to push the power envelope higher on these systems. It would require larger size for a bigger cooling system as well — and they have been trying to *shrink* the chassis with each successive generation. In any case, Dell is aware of feedback happening on this site, but they are also aware that the power users that typically visit here don’t represent the majority of Dell’s business customers. (We are due for a chassis refresh next year, and with 18” 16:10 systems starting to become normal to see, I’m hoping we will at least see that…)
-
This is normal and has been for the past several generations. If you check in NVIDIA control panel "Help->System Information" you will probably see that the power limit is the same between the 4000 and 5000 cards. Performance could be better with the 5000 GPU if you are doing something to take advantage of its extra capabilities (more vRAM, more tensor cores, ...), but otherwise there is no reason to expect it to be much "faster" than the 4000 GPU.