NotebookTalk

Everything posted by Aaron44126

  1. Did you flash the engineering sample vBIOS? It is required for both M6700 and M6800. The link is in a post by @TheQuentincc just a little ways up from here.
  2. Yeah, so my perspective is that you're giving it too much credit. It's not an intelligence at all, just a machine. A sophisticated one, sure, but also one that was built by people who can understand how it works.

My background — When I was finishing up my master's degree (≈15 years ago) I did a fair amount of studying of AI. I did some projects that had to do with predicting the weather, teaching a computer to get better at board games, and such. I built a neural network literally from scratch (not using "off-the-shelf" libraries) that could identify the number of open spaces in a parking lot given a photo, so I do understand the fundamentals. And in learning about this stuff, my general impression of AI went quickly from "Wow, computers are really smart" to more like "Wow, it's actually pretty simple once you know how it works; it just looks smart because of the massive scale". That sentiment has been echoed to me by multiple colleagues who have done work in AI. The techniques have gotten more sophisticated since then, sure, but the bulk of the advancement has come not so much from radically new methods of building AI, but rather from increasing hardware power and the general passage of time allowing for the training of larger and larger AI models.

Now, AI has made some notable mistakes. With literally millions of neural network node weights it is generally difficult to figure out why exactly the network came to the conclusion that it did by examining the network directly, but at the same time it is usually not hard to work out if you just take the time to think about it. A chatbot becomes racist because it "reads" racist content online or people interact with it that way. Or more critically, a Tesla crashes into the side of a semi-truck because Elon Musk (brilliantly) thought that it was fine to use visual data rather than LIDAR for self-driving, and because of the lighting or whatever it couldn't differentiate the side of the truck from clear sky, which is a mistake that could kill someone. Garbage in, garbage out, as they say. In the end, it was a human that made the mistake and the machine just did its thing.

I'm not trying to dismiss the dangers of AI. It will definitely be able to do some things better/faster than people can. I'm really worried about jobs and misinformation, as I stated before. The other thing would be people who don't have a full understanding of its capabilities and limitations allowing AI to be used for critical decision making. I'm just trying to point out that while it is getting better, there are very real limitations on what it can and can't do that will take a long time yet to overcome (if they ever are). Those limitations become more clear if you take the time to understand how this stuff actually works, and I would recommend that anyone who is "worried" about AI take the time to do that.

With regards to the supposed drone simulation, that just doesn't make sense. With this sort of thing you would give the AI a "goal" (maximize enemy kill count?) by assigning points to various outcomes. This is exactly what they did when teaching AI to master chess, Go, and StarCraft II. Then, you would run an absurd number of simulations with the AI taking random approaches, scoring itself based on the rule set, and over time "learning" which approaches take you from the "current state" to a "more favorable state" and eventually to your desired outcome ("win the game" / "kill the enemy") with a high degree of certainty.
Maybe the AI in such a training scenario would "discover" by random chance that destroying the communications tower and eliminating the pilot would lead to a better outcome. This would be quickly discovered by analyzing the results of the training runs and then the engineers would correct the "rules" by assigning massive negative points for this behavior and start the training from scratch. It would not be discovered by chance during a random one-off real-time simulation. The AI has not had a chance to "learn" that taking these actions resulted in a better outcome if it has not been trained. It's not a human so it can't just determine these things through intuition. I rather suspect the guy who gave the presentation was on the non-technical side, interpreted something he saw or heard wrong, or basically just didn't understand the difference between a thought experiment and a real simulation.
  3. This appears to be a bogus story. (Indeed, it seems like something more out of science fiction, and it is easy to poke holes in.) Some AI stories/videos I've run across recently:
  • AI in education (...Actually pretty cool, if it can be pulled off.)
  • An AI researcher on tripping up ChatGPT 4 with simple queries (Despite how "smart" and "confident" it sounds, ChatGPT is basically spitting out amalgamations of its training data – that's exactly how AI neural networks work – and not actually "thinking" like a person does. Throwing more training data at it won't necessarily fix this, because "common sense" isn't something that is spelled out on the Internet for a machine to pick up and "learn".)
  • Geoffrey Hinton (long-time researcher, at Google from 2013 until pretty recently) warning about the dangers of AI
Right now, I'm less worried about "death and destruction" caused by AI and more worried about it displacing real human jobs in certain sectors, and also stirring up the misinformation scene even more than it is already. These are real problems happening now and likely to get significantly worse over the next few years. There doesn't seem to be anything in the works to seriously address either problem.
  4. You don't need a drive specific to this model laptop. You can use any SATA laptop optical drive that meets these criteria:
  • 12.7mm height
  • Slot loading
  • Disc reading and recording features of your choice (DVD +/- RW, BD-R, etc...)
Some options would be the Philips/Lite-On DL-8A4SH12C and Toshiba TS-T633; these are easy enough to find on eBay/etc. If your new drive comes with a plastic bezel, remove it. (It just snaps off.) Take the old drive out, detach the mounting hardware and attach it to the new drive, and then put the new drive in.
  5. Well this is just whacked. I went through a process to figure out how to make sure that the GPU is powering off when it is not needed in Linux. Now that this is "working", I am finding that the fan speed goes up when the GPU is powered off. If the GPU is on the fans will periodically turn off completely after dropping below 1000 RPM, but if the GPU is off then the fan speed floor is around 1150 RPM for "optimized" mode and 1300 RPM for "quiet" mode. I reproduced it multiple times. All that it takes to turn the GPU on and cause the fan speed to go down is to have something like a GPU temperature measurement going. I guess I should use a power meter to figure out if the GPU is "really" powering off (lower total system power use). But one way or another, it seems that something is off with Dell's implementation of either the GPU power cycle or fan curves when the GPU is off. Maybe this also explains why I saw the fans stop cycling off a few months ago back when I was using Windows.
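For anyone who wants to reproduce this, something along these lines will log the GPU power state next to the fan speeds every few seconds. This is just a rough sketch; it assumes the dGPU is at PCI address 01:00.0 (check with "lspci") and that lm-sensors is installed and actually reports the fan RPM on this machine.

#!/bin/sh
# Sketch: log dGPU power state alongside fan RPM every few seconds.
# Assumes the dGPU is at PCI address 01:00.0 and that "sensors" (lm-sensors)
# exposes the fan speeds on this machine.
# Reading runtime_status from sysfs does not wake the GPU, unlike nvidia-smi.
while true; do
    printf '%s  GPU %s  ' "$(date +%H:%M:%S)" \
        "$(cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status)"
    sensors | grep -i fan | tr '\n' ' '
    echo
    sleep 5
done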
  6. I've done a bit of a "deep dive" on getting NVIDIA graphics switching working properly on Linux. For now, I'm going to post everything that you need to know to make the GPU power down properly when not in use, which was a bit tricky for me to get working. I plan to update this post in the future with some tips for getting applications to run on a particular GPU, but I have found that the "automatic behavior" works decently well in this case.

Make the GPU power down properly when not in use

This is doable, but the implementation seems to be a bit "brittle": it has to be configured just so, and it was not set up properly out-of-the-box in Ubuntu/Kubuntu. I had been sitting with the GPU always on and drawing 13-18 W of power even when no programs were using it.

Note here that I am assuming that the NVIDIA GPU PCI device ID in Linux is 01:00.0. This seems to be "standard", but you can check using the command "lspci". Some commands below will need to be tweaked if the PCI device ID is different. Also note that everything here most likely applies only to NVIDIA GPUs based on the Turing architecture or later. My understanding is that NVIDIA doesn't have support in their Linux driver for automatically powering off the GPU on Pascal and older architectures, so for those you will need to resort to older methods (Bumblebee, bbswitch, or direct ACPI commands), which are more messy. Right now I have only tested this on a GeForce RTX 3080 Ti laptop GPU, which is Ampere architecture.

To check and see if the GPU is powered on or not, use:

cat /sys/bus/pci/devices/0000\:01\:00.0/power/runtime_status

If you get "active" then the GPU is powered on, and if you get "suspended" then it is properly powered down. If you get "active" when you think that no programs are using the GPU and it "should" be powered off, then there are a number of things to check.

First, check the value here:

cat /sys/bus/pci/devices/0000\:01\:00.0/power/control

This should come back with the value "auto". If it instead returns "on", then the NVIDIA GPU will never power off. To get it set to "auto", proper udev rules need to be in place:

# Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"

# Disable runtime PM for NVIDIA VGA/3D controller devices on driver unbind
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on"
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on"

You can find the udev rules in /etc/udev/rules.d and /lib/udev/rules.d. If there is not an existing rule that needs to be tweaked, you can add these in a new file. I put them in a file named "/lib/udev/rules.d/80-nvidia-pm.rules". (All of this was borrowed from Arch.) Note that after making udev configuration changes, you should fire off:

update-initramfs -u

...and then reboot.

The next thing to check is the NVIDIA kernel module configuration. You can check the current kernel module parameters with this command:

cat /proc/driver/nvidia/params

The value for "DynamicPowerManagement" should be 2. (NVIDIA documentation indicates that "3" is also OK for Ampere and later architectures.) You can set this value with a rule in modprobe. Look in /etc/modprobe.d and /lib/modprobe.d for an existing rule to change, and if there is not one there then make a new file with the rule. You just need to add this line to a file ending with ".conf":

options nvidia "NVreg_DynamicPowerManagement=0x02"

...and then run:

update-initramfs -u

...and reboot. Use the command above to check the kernel module parameters and make sure that the change stuck.

Finally, if the GPU still won't power off, you should confirm that there is nothing running in the background that could be using it. You can run the command "nvidia-smi" and it will print out some status information, which includes a list of the processes using the NVIDIA GPU at the bottom. However, that may not be a sufficient check. In my case, I discovered that having the Folding@Home client service active would cause the GPU to always stay powered on, even though it was not doing any GPU work and there was nothing in nvidia-smi showing Folding@Home using it. (Using "lsof" to look for processes accessing libcuda.so would catch that.)

In "nvidia-smi", you might see the Xorg process using the GPU with a small amount of memory listed. Xorg automatically attaches to all available GPUs even if they are not driving a display. You can disable this behavior by adding this configuration blob to a file in /etc/X11/xorg.conf.d (filename ending with ".conf" if you want to add a new one):

Section "ServerFlags"
    Option "AutoAddGPU" "off"
EndSection

This will cause Xorg to only attach to whatever GPU the BIOS thinks is the default. It might break displays attached to a different GPU if you have a multi-monitor configuration. ...In any case, this isn't necessary to actually get the GPU to power off. The GPU can still suspend with Xorg listed in the process list of "nvidia-smi" as long as it is not actually driving a display.

Also note that running the command "nvidia-smi" causes the GPU to wake up, and you will need to wait several seconds after running it before the GPU will return to its "suspended" state. This will also be the case for most other tools that monitor the NVIDIA GPU, like nvtop. (Just one other thing that could trip you up when checking to see if things are working right.)

If you do have the GPU powering off properly, you might want to also check the power draw of the whole system. There are apparently a few laptops out there that have an incorrect BIOS configuration and end up drawing more power with the GPU "off" than with it "on" in a low-power state. (I've seen MSI mentioned specifically.)
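To pull all of these checks together, here is a rough script that runs through them in one go. This is just a sketch; it assumes the same 01:00.0 PCI address as above and the usual Ubuntu location for libcuda.so, so adjust those for your system.

#!/bin/sh
# Sketch: run through the GPU power management checks described above.
dev=/sys/bus/pci/devices/0000:01:00.0

echo "power/control:  $(cat $dev/power/control)"         # should be "auto"
echo "runtime_status: $(cat $dev/power/runtime_status)"  # "suspended" = powered off

# DynamicPowerManagement should report as 2 (or 3 on Ampere and later).
grep DynamicPowerManagement /proc/driver/nvidia/params

# Look for background processes holding CUDA open; these can keep the GPU
# awake even if "nvidia-smi" does not list them. (Path is the usual Ubuntu
# location for libcuda.so; adjust if yours is different.)
lsof /usr/lib/x86_64-linux-gnu/libcuda.so* 2>/dev/null

Reading the sysfs files directly doesn't wake the GPU the way running "nvidia-smi" does, so it is safe to run this repeatedly while waiting for the "suspended" state to show up.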
  7. This is "normal Windows behavior". I believe it has been addressed in Windows 11, which is better at remembering your window positions for multi-screen setups (though I have not personally tried it to see if it really works better). It might work better in this case if you make sure that the external screen is set as the "primary" screen when both displays are running.
  8. Mozilla has detailed plans for the end of Windows 7/8/8.1 support. Firefox 115, releasing in July, will be the last version that supports them; Firefox 116 (August) will not support Windows 7/8/8.1. Firefox 115 will have an ESR version that will continue to get updates through September 2024, so Windows 7/8/8.1 users may use that to stretch out support.
  9. That would be a normal thing to see if you have graphics switching / Optimus turned on. The display is attached to the Intel GPU in that case and there won't be very many options in NVIDIA control panel.
  10. Any NVMe drive should work. I’ve used a number of modules in multiple Precision systems and never had an issue. So, not sure what to say here. I have not tried this specific model. I do have an issue with the primary slot in my Precision 7560 (the PCIe4 one) failing to consistently recognize drives, so I don’t use it anymore. It’s still problematic after multiple motherboard replacements. It is expected that drives with a built-in heatsink will not fit. You should use a bare drive and the heatsink included with the system.
  11. That "dark" part of the "lid part" of the laptop in the image above is the plastic bezel, which I guess maybe doesn't need to be quite that thick. It has some slightly rubbery material around the edge, which I would presume protects the lid if you slam it shut quickly; that also adds 1mm or so to the height. The panel is recessed behind the bezel and sits only in the "light" part. (And, in person it doesn't really seem to be as thick as the image shows, at least to me...)
  12. I use hibernate when I need to transport the laptop. Regarding the dGPU, you can disable/enable it from the command line with DevManView, so a quick batch script will do. If you disable discrete display output in BIOS setup then external displays will attach to the Intel GPU and this will be an issue less often. For my system:

DevManView.exe /disable "NVIDIA GeForce RTX 3080 Ti Laptop GPU"
DevManView.exe /enable "NVIDIA GeForce RTX 3080 Ti Laptop GPU"
  13. Precision 7530 and 7540 use the same chassis so I wouldn't be surprised if that is the case. ...Probably more than you want to deal with, but it occurs to me that you could probably install a Precision 7530 "display assembly" on your Precision 7540 with no trouble at all, if it turned out that made it easier to swap around display panels.
  14. The bracket will just have a hole where the screw goes, but it won't help with affixing your panel to the chassis unless the chassis itself also has a place for the screw to go into.
  15. Most panels that mount with screws have such brackets on them (the panel itself doesn't have the screw holes); whether it will be compatible with your lid depends on whether there are "receptors" for the screws in the right place. (I have pulled panels for Precision 7510, 7530, and 7560 that were all mounted in such a way, which is why I was surprised to find that Precision 7540 uses adhesive for the panel.)
  16. This is just accumulated knowledge from this site and also NBR. However, especially if you have done laptop GPU replacements before, you can just look at those DGFF cards and see that there is no way that the Alienware card will work in the Precision. The screws don't go in the same spots. There are no HDMI or DP ports. The shape and size are different (especially accounting for how you'd have to rotate that Alienware card to get the DGFF connectors in the same place). It simply will not fit.
  17. The fans are slightly different sizes and have different enclosures, but the blade design is the same if you have fans from the same manufacturer. If you find a Delta CPU fan beneficial then you'd probably want a Delta GPU fan as well. The system runs the fans at roughly the same speed anyway. (I.e., a load on the CPU only will cause both the CPU and GPU fans to ramp up.)
  18. There is at least this in post #6. Higher CFM will be beneficial as well if you care about noise, since the fan can get the same work done at lower speed. Quoting that post: "Now I changed my cpu fan from the AVC #2 to the Delta fan #3, not only is the fan quieter but also a lot more powerful. The AVC fan ramped its speed to max making a horrible noise while the Delta fan roars off but still has plenty more to give if you manually control the fan speed and if you put your hand in front of the outlet grill you can already feel a hard air flow while the AVC barely tickled your face on full speed."
  19. For information on the types of fans available, see this thread and posts by @unnoticed. Delta fan is generally preferred. It is normal for the CPU to run hotter than the GPU with the same wattage. The CPU is harder to keep cool because it concentrates power in tiny "hot spots" where the CPU cores are (a relatively small portion of the die as a whole), while the GPU distributes power use more-or-less across the entire die. (GPU dies tend to be bigger than CPU dies anyway.)
  20. There is no compatibility between Precision and Alienware DGFF cards. In fact, there is no compatibility between Precision DGFF cards between models with different chassis designs either. The card shape and component layout are completely different even if the connector is the same. The only inter-generational upgrade that has been confirmed to work is using Precision 7X40 cards in a Precision 7X30 system.
  21. The provided AC adapter for the Precision 7680 is not USB-C; it is the standard Dell "barrel"-style power connector and it plugs into the left side. If you don't mind connecting two cables, you are free to connect the stock AC adapter to the system and use a dock at the same time. When you do so, you do not need to rely on power from the dock. You can connect a single-cable dock like the WD22TB, or just one of the connectors for the WD19DCS, and the system will still run fine. You only need to connect both of the WD19DCS connectors if you want "single-cable docking" (no standalone AC adapter connected). ...Do note that connecting only one of the WD19DCS connectors might also cause issues if you plan on using a high-bandwidth display setup (i.e., multiple 4K displays). A Thunderbolt dock like the WD22TB should be able to handle this over a single cable, though.
  22. This sort of issue popped up semi-regularly on NBR back when these systems were "old enough to have GPU replacements done" but not yet "really old" (2015-2018 timeframe?). There isn't a whole lot of discussion of the pre-7000 series systems here on NBT.
  23. This is a common problem with an "unsupported GPU" in this system. You should be able to set the brightness level in the BIOS. If you're running with Optimus / graphics switching turned on, it should work, but you might have to use the Intel GPU driver package from Dell, not the one built into Windows or from Intel's web site. If you're running with Optimus / graphics switching turned off, I don't know if there is really a solution to this. 😕
  24. I have not had this problem on my system since I disabled “c states for discrete GPU” and also enabled “Intel maximum turbo boost 3.0”, which supposedly disables NVIDIA Dynamic Boost 2.0.
  25. Dell has made 330W power adapters for Alienware systems and you can plug one into these systems and it will work… but it will not provide any performance uplift. (At least, that was the case with the Precision 7770.) You can provide feedback, but I don’t think Dell wants to push the power envelope higher on these systems. It would require a larger chassis for a bigger cooling system as well — and they have been trying to *shrink* the chassis with each successive generation. In any case, Dell is aware of feedback happening on this site, but they are also aware that the power users that typically visit here don’t represent the majority of Dell’s business customers. (We are due for a chassis refresh next year, and with 18” 16:10 systems starting to become normal to see, I’m hoping we will at least see that…)