NotebookTalk

Everything posted by Aaron44126

  1. Maybe disappointing, but this is not new or unexpected. Last year's systems were reporting 140W max TGP but were effectively capped lower as well (I forget what exactly, 110-115W in the Precision 7760?). @Dell-Mano_G doesn't manage the support reps, so I doubt he will have much input as to why certain support experiences played out the way that they did. I suspect he knew that there would be those of us disappointed with the power levels, which may be part of the reason that they were not divulged ahead of launch (as they have been in the past), and even at launch we only got CPU+GPU combo limits and not the individual limits. I haven't seen Dell advertising a 150W GPU power limit anywhere. In any case, it's unusual for Dell to make any changes or feature additions to a product after launch, so I figured that getting them to unlock the TGP or change the fan control behavior was a long shot at best. (...They will fix things, though, as we have seen with the IA AC/DC loadline fix for example — that might have even come as a direct result of the posts in this forum.)

     The sad state of affairs is that every system out there right now for this generation is compromised in some way, so you have to do some research and figure out which one has the downsides that bother you least. There are some 17" gaming-focused systems that might offer higher total raw performance. I'm only aware of three Alder Lake laptops that offer 4× NVMe slots. And if you want business-class support, you're really just looking at Lenovo, HP, and Dell. Of those, only HP and Dell are offering 4× NVMe (important to me right now, maybe less so if 16TB drives become available)... I haven't looked at HP too closely, but I suspect their GPU power level is even more gimped than the Precision 7770's (that's normally the case for them). HP does have a BIOS option to keep the fans from powering off, but that forces them to run at around 2400 RPM at the lowest level (so I hear), too noisy for idle workloads. And anyway, only Dell is offering 17", so there's really no other choice.

     I don't run super performance-heavy stuff, just moderate gaming and occasionally video encoding, and I've been OK with what I'm getting out of this system even if the benchmarks don't top the charts. (I do have a bulk data archiving project going on, which is why I need all these drives!)
  2. Eh, sounds like even in Russia people have been laughing at this guy for years.
  3. Like I posted above in the thread somewhere. I have watched interviews and I used to have a high opinion of this guy. His recent actions speak for themselves, though.
  4. If you turn off the schedule and just turn night light on manually from the control center, will it stay on?
  5. If you have it set up to sync these with OneDrive, I think that you can just:
     • Shut down the OneDrive client.
     • Move your entire OneDrive folder to your 8TB HDD. (C:\Users\(username)\OneDrive) Don't change the folder name; it still needs to be "OneDrive".
     • Start the OneDrive client. It will complain about the folder being missing. Tell it that you want to use a custom location and point it to where you moved that folder.

     Another way that you can move folders that Windows or applications expect to be in a certain location is:
     • Shut down any programs that might be using the folder.
     • Move the folder.
     • Open an admin command prompt.
     • Run the command: MKLINK /D "C:\path\to\old\location" "D:\path\to\new\location"

     This will set up a link so that the folder that has been moved can still be accessed using its old path, even if it is on a different drive. (Doing this for folders inside of a cloud-synced folder might not work out well, though, if the client doesn't handle symlinks properly. Moving the top-level folder should be OK.) A minimal worked example follows below.
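     Here is a minimal worked sketch of the symlink approach, assuming a made-up "Projects" folder and drive letters (none of these paths are from the post; adjust them to your own setup). Run it from an elevated (admin) command prompt:

         rem Move the folder contents to the new drive (/E includes subfolders, /MOVE deletes the source after copying).
         robocopy "C:\Users\Me\Projects" "D:\Projects" /E /MOVE
         rem Remove the old folder if robocopy left an empty shell behind.
         rd "C:\Users\Me\Projects" 2>nul
         rem Create a directory symbolic link so the old path still resolves.
         MKLINK /D "C:\Users\Me\Projects" "D:\Projects"

     Afterward, anything that opens C:\Users\Me\Projects is transparently redirected to D:\Projects, even though the data lives on a different drive.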
  6. Having the screen on adds about 8W; I am seeing 30-32W power use ("idle" but with plenty of apps running) and occasional spikes up to around 45W. You are right, the lid is shut and I am using RDP, which adds a virtual GPU & monitor.
  7. 22W, sometimes spiking up to around 35W. This is with hybrid graphics on, but I am also accessing the system remotely, so the screen is off. (I'll check again when I am sitting in front of the system later.) Also, note that down in the GPU section I can see GPU Power = 0W.
  8. When you see the system drawing too much power with Optimus on, disable and then re-enable the dGPU in Device Manager and see if the situation improves. Doing this action kicks the dGPU into properly powering off. (It will not fully power off if it is left in the "disabled" state.)
  9. I kind of see this in the opposite direction. It's not the decline of laptop GPUs so much as it is desktop GPUs finally growing up to take advantage of the desktop form factor. If desktops are an order of magnitude larger than laptops (talking about physical space / volume), then they should be able to dissipate an order of magnitude more heat. A decade ago, desktop CPUs and GPUs were not using more power than you could reasonably dissipate from a laptop chassis. Now, they are. NVIDIA is now building desktop GPUs that consume more than 400W, and there's not really a way that you could dissipate that amount of heat from a laptop chassis (plus heat from the CPU as well) using current designs and materials. So yes, you're right, the difference between desktop and laptop GPU performance will only continue to widen as NVIDIA continues to crank up GPU power limits. It's more a matter of physics than it is NVIDIA failing in the laptop space.

     Not to give NVIDIA a pass... One could make the argument that putting a GA103 or AD103 GPU chip into a laptop is stupid. Here, I am assuming that recent rumors about an upcoming "GeForce 4090" laptop GPU with an AD103 core and 175W TGP are true, but NVIDIA is already selling "GeForce 3080 Ti" laptop GPUs with the GA103 core (...I have one right here). The power limit is going to be so low that the performance benefit of using one of those chips over GA104/AD104 at the same power level is going to be in the 2-5% range (as you can see by looking at the 3080 vs. 3080 Ti performance numbers above), yet NVIDIA will charge hundreds of dollars more for the higher-end GPU.

     And of course, NVIDIA's propensity to name desktop and laptop GPUs the same is definitely misleading. Less aware consumers will think they're getting desktop 4090 performance out of their laptop 4090 GPU and... obviously it won't even be close. I preferred it back when they just stuck an "M" on the end of all of their laptop GPUs to make it clear that they were different. But NVIDIA likes it this way because it makes their GPUs appear more competitive against the desktop variants and thus easier to sell, I presume.

     A higher-bandwidth eGPU connection option could help laptop users who want access to desktop GPU levels of performance, I guess...?
  10. Yes, that's what I'm referring to. Normally, messing with the INF file is required for "unsupported" GPU upgrades. I'd suggest that NVIDIA didn't actually turn Optimus support on for whatever system this card is being recognized as (since it is not "supposed" to be working) and it would still require an INF mod to "fix" it properly. Probably easiest to just disable Optimus altogether, if you don't actually use the system on battery!
  11. Yes, Optimus is primarily for power savings. It is fine to run with it disabled; it generally removes some hassle that way. Inability to get the NVIDIA GPU to engage in Optimus after a GPU upgrade is generally a symptom of the INF mod being done wrong. I ran into this myself when I did my first GPU upgrade (Quadro M5000M in Precision M6700). When you do the device ID replacement, you have to make sure that you are replacing a configuration that supports Optimus. These Dell systems actually change the hardware ID depending on whether Optimus is enabled or not, so you have to pay attention to which hardware ID you are replacing when you do the INF mod. (Illustrative sketch below.)
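     To illustrate the idea, here is a hand-made sketch of what such INF entries look like; the device and subsystem IDs are placeholders I invented, not lines from any real driver INF. The point is that the same GPU (DEV_XXXX) can appear under two different hardware IDs, and you need to swap your device ID into the entry that corresponds to the Optimus-enabled configuration:

         ; Hypothetical entries mapping hardware IDs to driver install sections:
         %NVIDIA_DEV.XXXX.AAAA.1028% = Section001, PCI\VEN_10DE&DEV_XXXX&SUBSYS_AAAA1028   ; dGPU-only mode
         %NVIDIA_DEV.XXXX.BBBB.1028% = Section002, PCI\VEN_10DE&DEV_XXXX&SUBSYS_BBBB1028   ; Optimus mode

     You can check which hardware ID your system is currently presenting in Device Manager (GPU properties → Details → Hardware Ids) before deciding which entry to edit.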
  12. You need dGPU enabled in device manager to get the power savings. Doing the disable/enable just kicks it into powering off properly when it is stuck on. Yes and yes.
  13. dGPU stays on when it shouldn’t. I’ve posted about it lots before. With hybrid graphics on, try just disabling and then re-enabling the dGPU in Device Manager and see if that fixes the power draw and temps. I have this scripted to happen 2 minutes after I log in to fix Optimus (see the sketch below). The dGPU might be on even if the NVIDIA status icon is showing gray/off. An easy way to check is with Dell Fan Management: if it is showing a temperature for the dGPU, it is on; if it is off, it will show “—”.
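     For reference, here is a sketch of how that toggle could be scripted (this is my illustration, not the actual script from the post, and the device instance ID is a made-up placeholder). It uses pnputil's /disable-device and /enable-device switches, which require a reasonably recent Windows 10/11 build, and it must run elevated:

         @echo off
         rem Find the dGPU's real device instance ID first with:
         rem     pnputil /enum-devices /class Display
         rem The ID below is a fabricated placeholder; substitute your own.
         set "DGPU=PCI\VEN_10DE&DEV_0000&SUBSYS_00001028&REV_A1\4&0&0&0008"

         rem Disable, pause briefly, then re-enable so the dGPU can power down properly.
         pnputil /disable-device "%DGPU%"
         timeout /t 5 /nobreak >nul
         pnputil /enable-device "%DGPU%"

     To run it a couple of minutes after logon, it can be attached to a Task Scheduler logon trigger with a delay.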
  14. Yeah, so I see "Dynamic Boost" showing enabled, but there should be a separate entry for "Dynamic Boost 2.0", which I am not seeing.
  15. Sam Bankman-Fried’s former friends pleaded guilty and are cooperating in the FTX fraud case https://www.theverge.com/2022/12/21/23521967/sam-bankman-fried-ftx-crypto-fraud-caroline-ellison-gary-wang
  16. Is your system showing that Dynamic Boost 2.0 is available? Mine is not. (It should be an entirely separate entry in the NVIDIA "System Information" panel... I've seen it in my Precision 7560.)
  17. Ha. Don't get your hopes up. I think Dell thinks that their primary market for these systems (larger businesses) want to see both the power brick and the system chassis becoming smaller, not larger. (They're probably right.)
  18. I think twice the E cores will be fine. Will it work with higher power limits (overall) or clock speeds than Alder Lake / 12th gen? No. Will it be faster, as in, get more work done in the same amount of time overall? For fully multi-threaded loads, likely yes. More cores running at lower power levels (per core) is generally more efficient in terms of performance-per-watt.
  19. Rumored specs show AD103 as the top GPU for laptops next generation, and NVIDIA could indeed be naming it "GeForce 4090", despite the same 175W limit that we are seeing for this generation. (The top pro 5000-level GPU will likely match specs with whatever the top consumer mobile GPU is.) https://videocardz.com/newz/alleged-nvidia-geforce-rtx-40-laptop-gpu-clock-and-tgp-specs-emerge I'd say that this means that there is not much room for a mid-generation GPU upgrade in the 2024 systems. (I seriously doubt that NVIDIA will try to shove an AD102 GPU into laptops.)
  20. SBF has waived his right to formal extradition hearings and may be moved to the U.S. as soon as today. https://www.theblock.co/post/197107/bankman-fried-extradition-to-u-s-approved-wsj
  21. Thanks for the heads up. I updated the OP to reflect this. Coming back to this one... My fans are Delta, not Sunon. (I have not yet had the assembly replaced. Not sure if I will, since I have things working well for now.)
  22. Alder Lake HX also had a 157W upper limit. I think that the power efficiency at lower power levels is not bad, especially with the E cores in play... But, obviously, more power = higher performance, and competition is forcing Intel to raise power limits in order to stay on top of the performance charts. I suppose they figure you'll buy an Alder Lake "P" or "U" CPU if you want a system that gobbles less power...? Heck, even the Alder Lake U CPUs have up to 10 cores / 12 threads and can turbo boost to 4.8 GHz. (Dell offers "U" CPUs in the Precision 3000 line.)
  23. AFAIK, no laptop manufacturers are putting MXM cards in current-generation laptops anymore. You can find "standard" Turing and Ampere MXM cards (T1000/T2000/A1000/A2000), but they are built more for small embedded systems and "happen to work" in some older laptops with MXM slots. (They are also hard to find and expensive when they do pop up.) Higher-end cards like the A4500 exist but are even more difficult to come by. I don't recall seeing a GeForce MXM card since the RTX 2080, and those were wildly out of spec in terms of both size/shape and power requirement. High-end GPUs have moved beyond the MXM spec (it was designed for around 100W TDP max, and it can be pushed a bit higher than that... but Ampere GPUs can pull up to around 175W). A modern standard has not emerged; laptop manufacturers are happy to either solder GPUs directly onto the motherboard or use proprietary cards.
  24. It says "Raptor Lake" laptops will be available before the end of 2022 (Intel has stated this in the past) but it does not say that about the HX line specifically. I'm sticking with my earlier projection. We'll hear about lots of systems with Raptor Lake H at CES (coming up soon) that will probably be launching throughout first quarter 2023, but Raptor Lake HX will be a few months further out still.