NotebookTalk

Aaron44126

Moderator
  • Posts: 2,102
  • Joined
  • Days Won: 30

Everything posted by Aaron44126

  1. Bottom right corner and unfortunately “invisible” (white icon on white background) unless you are using a dark theme. Hover around and you should see options pop out. You can switch themes with the drop down at the very bottom of any page.
  2. Larger XPS systems ship with 130W power adapters. As an example, the Core i9-11980H CPU can draw over 85W of power by itself under high load (turbo boost), so a 100W power adapter wouldn't leave you a lot of headroom for other devices (the GPU, the display, the drives, the fans, attached USB devices, ...). So, if you push the system under load then you will most likely see some CPU and/or GPU throttling with a 100W power adapter (a rough power-budget example follows after this list). Under a light/"office" workload it would probably be fine. You'd also want to test how the XPS 17 behaves with an underpowered PSU connected. I think most recent Dell systems handle this reasonably well, but I have seen some that throttle very aggressively in this situation. (So, only buy one of these if you can return it, in case it doesn't work out.) I'd say maybe this would be a nice thing to put in your travel bag, but keep the original 130W power adapter at your desk or wherever the primary spot is that you use your XPS.

     USB-C power delivery is still advancing. Just now, cables supporting up to 240W of power delivery over a single USB-C connection are starting to hit the market. Maybe we will see combo chargers like this that can output more than 100W over a single port in the not-too-distant future.
  3. I haven't tried streaming gaming yet, but I have a hard time believing that I'd be happy with it... Unlike streaming video, where the endpoint client can buffer up several seconds or minutes of video, gaming has to be real time, so there is no room for buffering. Turnaround time from input -> transmit to data center -> run the game -> encode video output (compression artifacts!) -> send back to endpoint/client -> decode and display has to be what, 50 ms or less before you'd start noticing the lag? And no interruptions in the data flow at all. You'd have to have a really solid Internet connection. (A rough latency-budget example follows after this list.)
  4. So I am having to replace the HVAC system in my home, and I am figuring that I will take the opportunity to investigate smart thermostat options (Nest, etc.). Anyone mess with any of these and have any feedback? I’m mostly interested in being able to set up schedules, control it remotely, and HomeKit support. Not necessarily interested in having it do things like tell me the weather, and definitely not interested in having it listen to me (Alexa, etc.). I’m investigating on my own but was sort of wondering if anyone in the crowd here has experience with one of these and has something that they would recommend or not recommend.
  5. Are you running at >100% display scaling in Windows? You could try disabling hardware graphics acceleration in Thunderbird and see if that makes a difference.
  6. "Lower temperature threshold" needs to be something that is achievable. (65 °C maybe?) The program will not try to lock the fan speed unless the temperatures are below that value. If it says "waiting for embedded controller to activate the fans", I guess that the fans are off (0 RPM)? The program can't lock in the fan speed until the fans are running. Wait for the EC to decide to turn them on, or run something that will generate a bit of load. (I never have to generate load. The system will kick them on eventually.)
  7. Hmm, interesting, I didn't realize that this was a thing. Regarding high-end GPUs, HP does make some Quadro RTX (Turing) MXM cards. The layout is off a bit but RTX 3000 has been demonstrated working in the M6700 (by drilling new holes through the GPU heatsink). It doesn't even have the vBIOS/BSOD issue the Pascal cards have in the M6700/M6800. RTX 5000 has the same layout so it should also work. I'd think that these cards would also work in other systems, if they physically fit (and with a heatsink mod). https://www.nbrchive.net/forum.notebookreview.com/threads/dell-precision-m6700-nvidia-turing-rtx-card-discussion-thread.833140/index.html (Sorry you can't really see the pictures. Still need to fix attachments in NBRCHIVE.) I also haven't heard a whisper about the possibility of Ampere MXM GPUs. You would think RTX A1000/A2000/A3000 at least would be possible on a regular card.
  8. If you did want to raise the power limit it would probably just require flashing a different vBIOS on there... If you could find the "official Dell" vBIOS for this GPU then that would be ideal. It's hard to get to the point of a temperature problem with these beefy NVIDIA GPU chips that are intended to run in desktops at higher power levels. When I tried P5000 in the M6700, it would pull around 110W and I was still never able to get it anywhere near thermal throttling. I have noted that it leveled off at around 76 °C when I put it under an extended load. (That was with the fan running full tilt, though.) Also, doesn't matter to you since you are done already, but to anyone else: the X-bracket is easier to get off if you apply some heat (heat gun or hair dryer).
  9. It's not a Windows problem. The BIOS can't see the drive either. I can go to the "Storage" section of BIOS setup and it will show no drive installed in the slot, and in the case of me moving the Windows drive into the PCIe4 slot, the BIOS throws an error about no boot volume available. (It will also be completely absent from Disk Management and Device Manager when this is going on with a non-Windows drive.)

     Sorry, I missed this post. The setting is under Security and it is called something like "UEFI Capsule Firmware Updates". If it is on, you will see a "Firmware" category in Device Manager (there is a quick way to check for this after the list below). If the "firmware" device gets updated with a new driver (which contains a new BIOS image) then the BIOS will be flashed. Microsoft sometimes pushes these down through Windows Update.
  10. 1.10.1 is removed as well. (Don't know if anyone noted that.) They're all the way back down to 1.5.0 as the only option. All of the newer ones have a note saying that once you upgrade, you can't downgrade below 1.8.0, so if there really is some issue with these newer ones, and you have upgraded to one of them, then you are stuck. I also noticed that MS is pushing either 1.10.1 or 1.11.0 down through Windows Update as of a few days ago. It could be automatically installed if you have capsule updates enabled in BIOS setup (which is the default configuration). If you would like to make sure that you are in control of when BIOS updates happen, turn that off.
  11. I have this thread on installing an M.2 Wi-Fi card in the M6700; it would be pretty much the same with the M6800. You need U.FL to MHF4 antenna connectors/adapters as well. I have an AX200 installed right now; I think the AX210 should also work, but I haven't tried it. https://www.nbrchive.net/forum.notebookreview.com/threads/m-2-ngff-wireless-cards-in-precision-m4x00-m6x00-my-experience-with-m6700.821863/ (I am pretty sure the M6800 does not support XMP memory profiles. I don't think that Dell added that until the Precision 7730.)
  12. The reason for this, I believe, is that 8GB 1866 modules were not available when the M6800 first released. Intel Ark says that 1600 is the max but I think 1866 will work (from other users' experience, as I recall). Anyway, it doesn't matter that much. An increase in memory speed also yields an increase in CL, and the performance difference will be negligible for most applications.

      Agree with @Hertzian56, the "MX" CPU is probably not worth it unless it hardly costs any more than a lower-end quad-core CPU. It will thermal throttle under an extended load and you'll end up with around the same speed anyway.

      I am not aware of any working BIOS unlock options for any Precision system. I don't think this matters that much either. The CPU turbos to the point of thermal throttle (no point overclocking), and you can control the fans through other means. The closest I can think of to a BIOS mod would be @jeamn's attempt to get a GeForce 1070 booting; he managed a boot-time override of some of the BIOS tables.

      You want a 240W PSU to avoid throttling. (This era of Precision systems is pretty picky about this even if it seems like you have power headroom.) Look at the PA-9E on eBay. They don't cost that much.

      I think the LVDS motherboard is fine if you are satisfied with the display panel and don't plan on installing a Pascal (or better) GPU. A 4K panel is not possible even with the eDP motherboard in this system; it doesn't have enough display bandwidth. If you ever need to replace the display panel then you might need to get the eDP board. Good 1080p LVDS panels are hard to find now. (I tried putting an AUO 1080p LVDS panel in my M6700 and it was pretty bad quality compared to my current LG panel, especially dealing with dark/blacks on the screen. I was not able to find the LG panel for sale anywhere when I was looking, about three years ago. I managed to get my LG panel fixed, so I ended up not doing the swap and returning the AUO panel.) No idea about soldering on an eDP port; I have never heard of anyone trying something like that.

      Other considerations: The M6700 had one-pipe and two-pipe versions of the CPU heatsink. I'm not sure if the same is true for the M6800, but if it is, try for the two-pipe version for better cooling. I think 4th-gen is the first generation of CPUs that you can undervolt with Intel XTU or ThrottleStop? If you can undervolt, that will help with temperatures some as well.

      I also keep the CPU temps (and thus fans) under control by keeping turbo boost disabled when I don't have a high CPU load. You can do this with the Windows advanced power options: set "maximum processor state" to 99% and put the power slider (click the battery icon by the clock) in the middle setting to disable turbo boost. Slide the slider to the right to enable turbo boost. Easy toggle. (There is a rough sketch of the 99% setting after this list.) You can check that it is working by observing the clock speed in the Task Manager "Performance" tab.
  13. Well, that was a bust. My new configuration had the Windows / C drive in the PCIe4 slot. After booting up, I was in short order (≈5 minutes) greeted with a BSOD, and then a message from the BIOS saying that no bootable drive could be found. Opened the system up again and carefully reseated the drive. Same thing happened. Tried one more time, extra-carefully reseating the drive. Same thing (but this time it lasted a few hours). So. It's definitely the slot and not the drive. Replacing the motherboard doesn't seem to have fixed it. Best I can come up with is I'm somehow missing the "trick" to properly seat a drive in this slot with that plastic caddy thing, but it sure does look like it is snug in there to me. Anyway, I'm working on rearranging things so that I can work off of two drives and not three and just leave this slot empty for now.
  14. Eh, this is my work system and things are quite busy right now so I sort of want to spend as little time as possible dealing with this :-/. I need all three drives working so I’m not going to be experimenting with swapping different drive models in for now, if rearranging them “solves” the problem. I’m more concerned with stability than with the PCIe4 drive running at full speed. Diagnostics report nothing amiss. [Edit] Drive switcharoo complete, we'll see how it goes...
  15. Well, it didn't last that long. Now, I had the PCIe4 drive up and completely disappear again. This is on the version of the motherboard that does not have a physical switch by that slot. I'm going to be swapping the drives around to different slots, back to the configuration that I had working before — no PCIe4 drive in the PCIe4 slot. Seems like it could be an issue with my Sabrent PCIe4 drive, but the fact that I had the same drive disappearing issue with a coworker's system and a Samsung 980 Pro drive makes me doubt that. (Really hoping that the 7770 doesn't have this issue. I have four PCIe4 drives ready to put in there.)
  16. I remember seeing notable differences in some of the older models (7X30?), like the UHD panel being required to avoid 6-bits-per-color or to get a higher-brightness panel. For this generation, it looks like the two panels on offer for the 7770 have the same specs in terms of brightness and color coverage. The 7670 has two different FHD+ options and I think the baseline/cheaper one should be avoided (its max brightness is quite low).

      Normally it would be the case that a lower-resolution screen saves you battery life. UHD panels draw a bit more power and also require more "oomph" from the GPU in order to drive them. But... in the case of the OLED panel in the 7670, there is a whole new consideration for the panel's power use. In a normal LCD panel, the power use by the panel itself is fixed, changing mostly only based on the brightness level that you have selected. In an OLED panel, the power use is determined by the overall brightness of the image on the screen, as the pixels are individually lit (there is no backlight). If you operate in mostly "dark mode" applications then you can keep the power use pretty low. OLED would also offer the best color with a huge contrast ratio (true blacks are possible as the individual pixels can be shut off).

      I'm getting a 7770 so OLED is not in consideration, but the UHD panel there does offer 120 Hz, which is also a nice upgrade. Also, I really appreciate the increased sharpness of a UHD panel (clarity of text, detail in images, etc.) even though it does not offer increased working space when running at 200% scaling, so that's the way that I'll be going for sure. I don't use the system on battery very often, though, so power use isn't a factor for me.
  17. Windows 11, version 22H2 might hit RTM this month. https://www.neowin.net/news/windows-11-22h2-to-reportedly-to-reach-rtm-this-month/ (..."RTM" doesn't have the same meaning that it used to, though.)
  18. Correct on the timeframe. Member list shows the count at 398 with new users joining daily.
  19. I’ll probably let it run with both “fixes” for about two weeks. And if I don’t have the problem in that time, I will then switch back to AHCI/NVMe mode and see what happens.
  20. Haven’t had the issue again yet, but I think it’s a bit too soon to say that it’s fixed.
  21. Still having issues with my PCIe4 drive. This is after a motherboard swap; the new one doesn't have the "pressure switch" by the PCIe4 slot. The issue is different than before. With my old motherboard, the PCIe4 drive would completely disappear. It would be gone from File Explorer, it would be gone from Device Manager, it would not even show in the BIOS setup "Storage" section, and I'd have to power off the PC to (maybe) get it back. With the new motherboard, I'm not seeing the drive disappear. It still shows up in Device Manager and File Explorer. The problem is, Windows stops recognizing the file system and reports it as "RAW", and then I cannot access the drive files. I can't find a way to get it to remount properly without a reboot. This has happened twice in the last week. This morning, I took some "try to maybe fix it" steps. Updated to BIOS 1.11.0. (This might have started with the 1.10.x update; I don't remember it happening with 1.5.0 which is what the board came with. I ran that version for a few weeks before trying to update.) I also switched from AHCI/NVMe mode to RAID to see if that makes a difference. (If it doesn't, I'll be switching right back.) If it continues to happen, I guess I'll have to try shuffling the drives around again so that the PCIe4 drive is in a PCIe3 slot. (Don't know what else to try.) We'll see what happens...
  22. Great, so if that holds true, and if one of the NVMe slots is directly connected to the CPU, then you could use three drives at full speed, and there would only be bandwidth contention on the 7770 if you tried to use all three "bottom" SSDs at the same time (but it would still give you two-thirds bandwidth for each). Or, maybe there would be a little bit of bandwidth contention if you also threw in something like a high load on the Ethernet port. This also just makes me more interested in getting one of these in to mess with. 🙂 I do have four PCIe4 drives sitting here ready to go.

      I use VMware Workstation a lot so I have similar questions. I think in the long term there will be some work done to properly support P/E cores in a hypervisor (you would be able to allocate them separately for your VM, and the Intel Thread Director stuff would be integrated between host and guest to help decide how to allocate load), but I am not aware that any work has really been done on this right now. I'm sort of expecting to find myself in a situation where Windows 10 really wants to put the VM load on the E-cores because it doesn't recognize it as a "foreground process", and I'll have to manually set the CPU affinity of the vmware-vmx process (a rough sketch of doing that follows after this list)...

      [Edit] Dug up a thread and found that it is just as I feared, but there is a way to solve it right in the VMware configuration. You can't really get it to balance between P and E cores, though; you basically have to pick one or the other. (Looks like this guy was testing on Windows 11 too, so Thread Director support doesn't really fix it.) https://communities.vmware.com/t5/VMware-Workstation-Pro/Workstation-16pro-on-alder-lake-system/td-p/2880327 Might be able to use VMware's focus priority feature to get Windows to switch between P cores or E cores depending on whether the VM has input focus or not. (That would probably be fine with me.) This may well be addressed in some way in a future update to VMware. ...Anyway, maybe Hyper-V will be smarter.
  23. There could also be benefits if you have only PCIe3 drives installed in the "shared bandwidth" slots... Since bandwidth between the PCH and CPU is also doubled, maybe you could run two of them at full speed now? (Will require some testing.) Anyway, I am planning on a Storage Spaces array for three drives in those slots, similar to the configuration you describe (a sketch of that setup follows after this list). A performance boost would be nice, but really just getting them grouped into one huge volume for simplicity of data management is my main goal.
  24. I doubt that this is happening. Alder Lake has 20 PCIe4 lanes off of the CPU and the rest are off of the PCH. There are limits to how these 20 lanes can be split out, too. I would imagine 16 are for the dGPU (even if it is just using an x8 connection) and the remaining four could be used for a single NVMe drive. Anything connected to the PCH basically shares a PCIe4 x4 link back to the CPU (if I understand properly). Probably, this is laid out similarly to last year’s systems with one NVMe drive connected directly to the CPU (at PCIe4) and the rest connected to the PCH. The difference here is that the Tiger Lake PCH capped out at PCIe3, while the Alder Lake PCH does have enough PCIe4 lanes to hook up the remaining NVMe slots at that speed. They’ll just have shared bandwidth back to the CPU. Will be able to confirm the architecture once someone has a system in hand. The 20 lanes off of the CPU can be split out as 8-8-4, so technically there could be two NVMe drives directly connected (assuming that they use x8 for the dGPU and not x16)… But given the motherboard layout with three NVMe drive slots grouped together, it would be odd if those were not all connected to the PCH.
  25. Apologies, I guess I mixed that up. The spec sheet just shows support up to 32GB ECC with SODIMM. I saw 64GB ECC in one of the spec leaks. (Very first CAMM reference, AFAIK.) Seems like there’s no reason that it would not work, but I haven’t seen anything specific about what DDR5 SODIMM ECC capacities will actually be available. Maybe Dell won’t be able to offer it (at first?) if no one is making 32GB ECC modules. I guess we can put it to @Dell-Mano_G. Sometimes the spec sheets are off. Will Dell offer/support 64GB ECC?
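
A rough power-budget example to go with post 2 above. The 85 W CPU figure comes from that post; the other per-component draws are made-up placeholder estimates, not measured numbers for any XPS 17.

    # Rough power-budget check (illustrative numbers only).
    component_draw_w = {
        "CPU (i9-11980H under turbo, from the post)": 85,
        "dGPU at light load (assumed)": 20,
        "display, drives, fans, USB devices (assumed)": 15,
    }

    total_draw = sum(component_draw_w.values())
    for adapter_w in (100, 130):
        headroom = adapter_w - total_draw
        status = "expect throttling" if headroom < 0 else f"{headroom} W of headroom"
        print(f"{adapter_w} W adapter vs. ~{total_draw} W estimated draw: {status}")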
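
A rough latency-budget example for the game-streaming chain described in post 3 above. Every per-stage number is an illustrative assumption; the 50 ms figure is the "before you notice lag" threshold suggested in the post.

    # Sum an assumed per-stage latency budget against a 50 ms target.
    stages_ms = {
        "capture input and send to data center": 15,
        "run the game (one 60 fps frame)": 17,
        "encode video output": 5,
        "send back to endpoint/client": 15,
        "decode and display": 8,
    }

    total_ms = sum(stages_ms.values())
    budget_ms = 50
    print(f"Estimated round trip: {total_ms} ms against a {budget_ms} ms budget")
    print("Over budget -> noticeable lag" if total_ms > budget_ms else "Within budget")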
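
A small sketch of the wait-then-lock behavior described in post 6 above. This is not the fan-control program's actual code; the sensor samples are simulated so the sketch runs standalone, and a real implementation would query the EC/temperature sensors instead.

    import time

    LOWER_TEMP_THRESHOLD_C = 65  # the "lower temperature threshold" must be achievable

    # Simulated (cpu_temp_c, fan_rpm) readings: fans start off, the EC kicks them
    # on, then the temperature falls below the threshold.
    SAMPLES = [(78, 0), (76, 0), (74, 2400), (69, 2600), (63, 2600)]

    def wait_until_lockable(samples, poll_seconds=0.5):
        for temp_c, rpm in samples:
            if rpm == 0:
                print("Fans are off; waiting for the embedded controller to activate the fans...")
            elif temp_c >= LOWER_TEMP_THRESHOLD_C:
                print(f"{temp_c} °C is above the lower threshold; waiting...")
            else:
                print(f"{temp_c} °C with fans at {rpm} RPM; the fan speed can be locked now.")
                return True
            time.sleep(poll_seconds)
        return False

    wait_until_lockable(SAMPLES)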
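
A quick way to check for the "Firmware" device category mentioned in posts 9 and 10 above, by calling PowerShell's Get-PnpDevice from Python. This assumes a stock Windows 10/11 install with the built-in PnpDevice module; if nothing is listed, UEFI capsule updates are most likely disabled in BIOS setup.

    import subprocess

    # List Firmware-class devices; their presence means Windows Update can push
    # BIOS images down as "driver" updates (UEFI capsule updates enabled).
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-PnpDevice -Class Firmware | Format-Table -AutoSize Status, FriendlyName"],
        capture_output=True, text=True,
    )

    listing = result.stdout.strip()
    if listing:
        print(listing)
        print("Capsule updates appear to be enabled; BIOS flashes may arrive via Windows Update.")
    else:
        print("No Firmware-class devices found; capsule updates are likely turned off.")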
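
A sketch of the 99% "maximum processor state" trick from post 12 above, wrapping the stock Windows powercfg utility from Python. Run it from an elevated prompt; the power-slider part of the toggle described in that post is still done by hand.

    import subprocess

    def set_max_processor_state(percent):
        """Cap the maximum processor state on the active power scheme (AC and DC)."""
        for flag in ("/setacvalueindex", "/setdcvalueindex"):
            subprocess.run(
                ["powercfg", flag, "scheme_current", "sub_processor",
                 "PROCTHROTTLEMAX", str(percent)],
                check=True,
            )
        # Re-apply the current scheme so the change takes effect immediately.
        subprocess.run(["powercfg", "/setactive", "scheme_current"], check=True)

    set_max_processor_state(99)    # 99% keeps turbo boost from engaging
    # set_max_processor_state(100) # restore to allow turbo boost again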
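
A sketch of the manual workaround mentioned in post 22 above: pinning every vmware-vmx process to a fixed set of logical CPUs so the host scheduler does not push the VM onto E-cores. It uses the third-party psutil package, and the P_CORE_CPUS list is an assumption; check how the logical processors map to P-cores on your own CPU (on hybrid parts, the P-cores and their hyperthreads are typically enumerated first).

    import psutil  # pip install psutil

    P_CORE_CPUS = list(range(0, 16))  # assumed: 8 P-cores with Hyper-Threading

    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name.startswith("vmware-vmx"):
            try:
                proc.cpu_affinity(P_CORE_CPUS)  # pin the VM worker process to P-cores
                print(f"Pinned PID {proc.pid} to CPUs {P_CORE_CPUS}")
            except psutil.AccessDenied:
                print(f"Access denied for PID {proc.pid}; run elevated.")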
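
A sketch of the "three drives into one big volume" setup from post 23 above, using the built-in Storage Spaces cmdlets through PowerShell. The pool and volume names and the simple (non-resilient) layout are illustrative choices; run it elevated and only against disks you intend to wipe.

    import subprocess

    # Pool every drive that reports itself as poolable, then carve one big
    # simple (no resiliency) NTFS volume out of the pool.
    ps_script = (
        '$disks = Get-PhysicalDisk -CanPool $true; '
        'New-StoragePool -FriendlyName "NVMePool" '
        '-StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks; '
        'New-Volume -StoragePoolFriendlyName "NVMePool" -FriendlyName "Data" '
        '-FileSystem NTFS -ResiliencySettingName Simple -UseMaximumSize'
    )

    subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)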