NotebookTalk

Aaron44126

Everything posted by Aaron44126

  1. I feel like this is an over-the-top solution that won't help out much in the end. The laptop's cooling system is more about getting heat from the CPU/GPU out of the system as quickly as possible. The CPU is going to run at 100°C under full load if the power limits are high enough, and lowering the temperature of the air going in by 10°C or so may help a little bit ... but I have a hard time believing that it would be worth the complexity of setting this thing up plus the power that it would use and noise that it would make just to nudge the CPU speeds up by a few hundred MHz at best. (Also, would it really be that portable? Just use a desktop with a bigger cooler...) Plus, for the default configuration, the power limit is more of a performance limiter than the thermal limit is. (Especially on the GPU side.) Reducing the input air temperature won't help at all there.
  2. This is crazy. That error code is indeed for a RAM issue... Was your new motherboard tested with the original RAM or the Ripjaws first? I'm wondering if perhaps the same issue was created a second time; maybe it just doesn't like the Ripjaws and they cause some kind of permanent failure. In general, I recommend "mainstream" memory for these systems (Samsung, Kingston, Crucial, etc.) because it would not be the first time that "enthusiast" memory (Corsair, G.Skill, etc.) has caused a problem, though I have never heard of something like this where there was a "permanent" issue. ...If you first tried the new motherboard with the original RAM and it was still broken, then clearly there is some other problem in the system. Odd, because replacing the motherboard actually replaces most of the components that could have gone bad. ...Maybe the GPU? I am at a loss. In any case, since it is new and under warranty, if Dell can't get it fixed promptly then just tell them that you want a full system replacement.
  3. Yeah, I was going to say this as well... If they do produce an 18" system, it will likely be a similar situation to them moving from 15.6" to 16" — they just made the display 16:10 and kept the width (roughly) the same, so the laptop's footprint is (roughly) the same, but they can market it with a larger display. There is certainly room in the Precision 7770 to make the display "taller". I saw the rumblings about a possible 18" system but I think that it is likely that Alienware will go first, and next year's Precision "7X80" systems will have the same chassis that this year's had. (They do consistently tend to use the same chassis for at least two generations.) Even though Alienware didn't get an Alder Lake HX system, it is looking like they may well get a Raptor Lake HX system (I actually ran across an Alienware guy on LinkedIn who stated he was involved in the project for such a thing), so next year's Alienware lineup might be more interesting. CPU-Z actually shows a blank spot / no value for the GeForce TDP on my system. Maybe it can't figure it out because I have graphics switching enabled ...?
  4. I remember this as well. I've been into computing since the DOS days, when monitors could show black and orange and you had to manually configure how many heads and cylinders your hard drive had in the BIOS settings, and I used some early laptops with an LCD transition delay so bad that you could see a "ghost image" of the mouse pointer for a second or so after it was moved. 🙂 And I remember using Windows 95 with multiple apps open and Internet and everything, but only 16 MB of RAM, and somehow that was workable... boggles the mind now. Like I said... I've been in this space for a while and I know how they operate. I'm talking about the Dell Precision mobile workstation space in particular. Even though I don't get to personally use a new model every year, I've been active on this site (or NBR) around all Precision product launches going back to 2012 (and I was "lurking" even before then). Last year with the Precision 7X60 it was exactly the same thing, except that the power limits were slightly lower. A dGPU chip which NVIDIA had spec'ed to work at 165W was limited to 110-115W in the Precision 7760, despite the NVIDIA control panel showing 140W max TGP. (Limits in the Precision 7560 were even lower.) Why would this year's models be different? I would not hold out hope that things are going to change for next year's models either, nor the year after... The big difference between now and the "old days" is the ever-increasing power ceiling on chips. A decade ago you could expect CPUs and GPUs to operate at their maximum speeds and power levels even in a laptop, as long as the cooling system was sufficient; that's not really the case anymore. Different manufacturers have approached this challenge in different ways with different models (it comes back to "everything is a compromise"), and Dell is more conservative with the power limits in Precision than they are in Alienware.
  5. Everyone has their own definition, but to me a "gaming" laptop would be a high-spec laptop that doesn't quite hit workstation-class. You could certainly use a "gaming" laptop for work just like you could a "workstation", but most businesses wouldn't buy one, so gamers are the main audience. Workstation-class would entail business-class support (mainly, prompt attention and parts replacements without having to mail the laptop in), a durable build (using more metal than plastic for the chassis), and long-term support (BIOS and driver updates for several years). MSI and ASUS haven't hit this threshold. Yeah, I think that the main sticking point here is expectations. I fully expected Dell to have a pretty "low" limit on the dGPU when I bought this thing. (Nowhere are they advertising a particular power capability for either the CPU or dGPU.) I've been around this space for a while and I know how they operate. I purchased it with that in mind and was not the least bit surprised to find out that my expectations were correct. I'm generally happy just that the power limits have been raised a bit from the Precision 7760. I feel that the trajectory that Dell is on with the Precision 7000 line means that they will be trying to shrink the chassis by a few mm whenever they refresh it, and they will not be increasing the power supply above 240W (unless there is some big advance in cooling design/materials), the perception being that their main audience (large businesses) does not want "bigger" systems. They didn't design this system to go toe-to-toe with Alienware. The latest 17" Alienware uses more space on the cooling system with four fans, which means higher power limits and compute potential but less room for other things (NVMe drive slots, for example). I'd also argue that the Precision 7X70 systems are largely working "as designed", as the rep even said. It simply wasn't designed to the level of capability that you were hoping for.
I might wish that they made some different design choices, for sure, but there isn't much any single individual or small group can do to change how they approach their product design. (I also wish that the industry as a whole would be more transparent when it comes to expected performance / power levels when the same chips are offered in different products.) There have been some missteps (AC/DC loadline, poor performance from some of the early coolers) which Dell worked to fix, but they aren't obligated to design to max out their dGPU and CPU power limits to the highest levels "allowed" by NVIDIA and Intel. As for the others (HP, Lenovo, ASUS, etc.), as has been discussed in recent posts, each of their systems has its own compromises. Everything is a trade-off (ever constant in computing). The other workstation makers (HP/Lenovo) have apparently decided that their main audience (large businesses) does not want "bigger" systems, to the point that they dropped their 17" offerings altogether. Anyway, everyone has got to look at the options and pick what matters to them most — this forum may be unique in that there are threads going around discussing all of these things for each of the options on offer. For myself, I've put business-class support, a 17" display, and max storage at a higher priority than max CPU and dGPU performance. ...Otherwise, I would have bought a "gaming" laptop.
  6. You are right, Alder Lake HX just has the iGPU with 32 EUs. If you need more power from the iGPU, you should get a system with an Alder Lake "H" CPU and not "HX". (Dell has no way of offering the system with "Iris" graphics; there are no Alder Lake HX CPUs that offer this. I think they might have written "Iris" in some marketing material because it is the same Xe graphics architecture used in other Alder Lake CPUs... it just has fewer EUs. But, that is misleading for sure.) I've used this and previous systems with three 4K displays connected to the iGPU (one built-in + two external) and performance is fine for desktop applications and high-resolution video playback. I do always get my work dev systems (like my Precision 7560) with a low-end NVIDIA GPU though, just in case some better graphics performance is needed for something. Nowadays, even the bottom-end NVIDIA dGPUs have tensor cores which can be used for AI applications as well. As for CPU performance, I also work in the software development field and again I have found performance to be "fine". It's better than the Precision 7X60 for sure, even if there are other systems that could push the CPU a bit harder. Alder Lake's hybrid architecture is a bit of an adjustment; I'm still running Windows 10, so I am using Process Lasso to make sure that certain processes end up on certain types of cores as I desire.
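Process Lasso handles the pinning automatically on Windows. Just to illustrate the underlying idea, here is a minimal sketch using Python's Linux-only stdlib affinity calls; which core IDs correspond to P-cores vs. E-cores varies by system, so the core choice below is an arbitrary placeholder:

```python
import os

# Sketch of pinning the current process to specific cores — the same
# idea Process Lasso automates on Windows. On a hybrid CPU you might
# restrict a latency-sensitive process to P-cores only.
available = os.sched_getaffinity(0)   # cores this process may run on
print("current affinity:", sorted(available))

# Restrict this process to a single core (placeholder choice).
subset = {min(available)}
os.sched_setaffinity(0, subset)
assert os.sched_getaffinity(0) == subset
```

The same pattern works for other processes by passing their PID instead of 0, provided you have permission to modify them.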
  7. Huh? I'm not sure that I understand this comment. XPS 17 9720 has a puny-by-comparison 130W PSU. With a GeForce RTX 3060, it scores around 6600 in Time Spy. Precision 7770 certainly tramples it even without any tuning. Anyway, I knew full well going in that both the CPU and GPU are recycled/underpowered desktop components that would perform accordingly. Dell doesn't turn BIOS updates around in 3 days. I've been through beta testing BIOS updates with them before. If anything I suspect that this went through a few weeks of investigation/planning/implementation/testing before it was included with the BIOS update posted on the support site. (Referring to the AC/DC loadline fix here.)
  8. Maybe disappointing, but this is not new or unexpected. Last year's systems were reporting 140W max TGP but were effectively capped lower as well (I forget what exactly; 110-115W in the Precision 7760?). @Dell-Mano_G doesn't manage the support reps so I doubt he will have much input as to why certain support experiences played out the way that they did. I suspect he knew that there would be those of us disappointed with the power levels, which may be part of the reason that they were not divulged ahead of launch (as they have been in the past), and even at launch we only got CPU+GPU combo limits and not the individual limits. I haven't seen Dell advertising a 150W GPU power limit anywhere. In any case, it's unusual for Dell to make any changes or feature additions to a product after launch, so I figured that getting them to unlock the TGP or change the fan control behavior was a long shot at best. (...They will fix things, though, as we have seen with the IA AC/DC loadline fix for example — that might have even come as a direct result of the posts in this forum.) The sad state of affairs is, every system out there right now for this generation is compromised in some way, so you have to do some research and figure out which one has the downsides that bother you least. There are some 17" gaming-focused systems that might offer higher total raw performance. I'm only aware of three Alder Lake laptops that offer 4× NVMe slots. And if you want business-class support, you're really just looking at Lenovo, HP, and Dell. Of those, only HP and Dell are offering 4× NVMe (important to me right now, maybe less so if 16TB drives become available)... I haven't looked at HP too closely but I suspect their GPU power level is even more gimped than the Precision 7770's (that's normally the case for them). HP does have a BIOS option to keep the fans from powering off, but that forces them to run at around 2400 RPM at the lowest level (so I hear), which is too noisy for idle workloads.
And anyway, only Dell is offering 17", so there's really no other choice. I don't run super performance-heavy stuff, just moderate gaming and occasionally video encoding, and I've been OK with what I'm getting out of this system even if the benchmarks don't top the charts. (I do have a bulk data archiving project going on which is why I need all these drives!)
  9. Eh, sounds like even in Russia people have been laughing at this guy for years.
  10. Like I posted above in the thread somewhere. I have watched interviews and I used to have a high opinion of this guy. His recent actions speak for themselves, though.
  11. If you turn off the schedule and just turn night light on manually from the control center, will it stay on?
  12. If you have it set up to sync these with OneDrive, I think that you can just:
      • Shut down the OneDrive client.
      • Move your entire OneDrive folder to your 8TB HDD. (C:\Users\(username)\OneDrive) Don't change the folder name; it still needs to be "OneDrive".
      • Start the OneDrive client. It will complain about the folder being missing. Tell it that you want to use a custom location and point it to where you moved that folder.
      Another way that you can move folders that Windows or applications expect to be in a certain location is:
      • Shut down any programs that might be using the folder.
      • Move the folder.
      • Open an admin command prompt and run: MKLINK /D "C:\path\to\old\location" "D:\path\to\new\location"
      This will set up a link so that the folder that has been moved can still be accessed using its old path, even if it is on a different drive. (Doing this for folders inside of a cloud sync'ed folder might not work out well, though, if the client doesn't handle symlinks properly. Moving the top-level folder should be OK.)
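The move-then-link trick is generic enough to demonstrate with throwaway paths. Here's a minimal Python sketch of the same idea (all paths below are stand-ins; on Windows, creating a symlink needs an elevated prompt or Developer Mode, which is why MKLINK is run from an admin command prompt):

```python
import os
import shutil
import tempfile

# Demonstrate the move-then-link trick: move a folder to a "new
# drive", then leave a symlink at the old path so anything still
# using the old path keeps working.
base = tempfile.mkdtemp()
old_path = os.path.join(base, "OneDrive")               # original location
new_path = os.path.join(base, "bigdrive", "OneDrive")   # new location

os.makedirs(old_path)
with open(os.path.join(old_path, "note.txt"), "w") as f:
    f.write("hello")

os.makedirs(os.path.dirname(new_path))
shutil.move(old_path, new_path)                         # move the folder
# Equivalent of MKLINK /D "old" "new":
os.symlink(new_path, old_path, target_is_directory=True)

# The file is still reachable through the old path.
with open(os.path.join(old_path, "note.txt")) as f:
    print(f.read())  # hello
```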
  13. Having the screen on adds about 8W, I am seeing 30-32W power use ("idle" but plenty of apps running) and occasional spikes up to around 45W. You are right, the lid is shut and I am using RDP which adds a virtual GPU & monitor.
  14. 22W, sometimes spiking up to around 35W. This is with hybrid graphics on, but also I am accessing the system remotely so the screen is off. (I'll check again when I am sitting in front of the system later.) Also, down in the GPU section I can see GPU Power = 0W.
  15. When you see the system drawing too much power with Optimus on, disable and then re-enable the dGPU in Device Manager and see if the situation improves. Doing this action kicks the dGPU into properly powering off. (It will not fully power off if it is left in the "disabled" state.)
  16. I kind of see this in the opposite direction. It's not the decline of laptop GPUs so much as it is desktop GPUs finally growing up to take advantage of the desktop form factor. If desktops are an order of magnitude larger than laptops (talking about physical space / volume) then they should be able to dissipate an order of magnitude more heat. A decade ago, desktop CPUs and GPUs were not using more power than you could reasonably dissipate from a laptop chassis. Now, they are. NVIDIA is now building desktop GPUs that consume more than 400W and there's not really a way that you could dissipate that amount of heat from a laptop chassis (plus heat from the CPU as well) using current designs and materials. So yes, you're right, the difference between desktop and laptop GPU performance will only continue to widen as NVIDIA continues to crank up GPU power limits. It's more a matter of physics than it is NVIDIA failing in the laptop space. Not to give NVIDIA a pass... One could make the argument that putting a GA103 or AD103 GPU chip into a laptop is stupid. Here, I am assuming that recent rumors about an upcoming "GeForce 4090" laptop GPU with an AD103 core and 175W TGP are true, but NVIDIA is already selling "GeForce 3080 Ti" laptop GPUs with the GA103 core (...I have one right here). The power limit is going to be so low that the performance benefit to using one of those chips over GA104/AD104 at the same power level is going to be in the 2-5% range (as you can see by looking at the 3080 vs 3080 Ti performance numbers above), yet NVIDIA will charge hundreds of dollars more for the higher-end GPU. And of course, NVIDIA's propensity to name desktop and laptop GPUs the same is definitely misleading. Less aware consumers will think they're getting desktop 4090 performance out of their laptop 4090 GPU and ... obviously it won't even be close. 
I preferred it back when they just stuck an "M" on the end of all of their laptop GPUs to make it clear that they were different. But NVIDIA likes it this way because it makes their GPUs appear to be more competitive against the desktop variants and thus easier to sell, I presume. A more high-bandwidth eGPU connection option could help laptop users who want access to desktop GPU levels of performance, I guess...?
  17. Yes, that's what I'm referring to. Normally, messing with the INF file is required for "unsupported" GPU upgrades. I'd suggest that NVIDIA didn't actually turn Optimus support on for whatever system this card is being recognized as (since it is not "supposed" to be working) and it would still require an INF mod to "fix" it properly. Probably easiest to just disable Optimus altogether, if you don't actually use the system on battery!
  18. Yes, Optimus is primarily for power savings. It is fine to run with it disabled; doing so generally removes some hassle. Inability to get the NVIDIA GPU to engage in Optimus after a GPU upgrade is generally a symptom of the INF mod being done wrong. I ran into this myself when I did my first GPU upgrade (Quadro M5000M in Precision M6700). When you do the device ID replacement you have to make sure that you are replacing a configuration that supports Optimus. These Dell systems actually change the hardware ID depending on whether Optimus is enabled or not, so you have to pay attention to which hardware ID you are replacing when you do the INF mod.
  19. You need dGPU enabled in device manager to get the power savings. Doing the disable/enable just kicks it into powering off properly when it is stuck on. Yes and yes.
  20. dGPU stays on when it shouldn’t. I’ve posted about it a lot before. With hybrid graphics on, try just disabling and then enabling the dGPU in Device Manager and see if that fixes power draw and temps. I have this scripted to happen 2 minutes after I log in to fix Optimus. The dGPU might be on even if the NVIDIA status icon is showing gray/off. An easy way to check is with Dell Fan Management: if it is showing a temperature for the dGPU, the dGPU is on; if it is off, it will show “—”.
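For reference, a rough sketch of what such a login script could look like, assuming PowerShell's Disable-PnpDevice / Enable-PnpDevice cmdlets. The device instance ID below is a placeholder; you'd substitute the dGPU's real one from Device Manager (Details tab → "Device instance path"), and it needs to run elevated. This is just an illustration of the approach, not my exact script:

```python
import subprocess
import time

# Placeholder instance ID — replace with the dGPU's actual value
# from Device Manager.
DGPU_ID = r"PCI\VEN_10DE&DEV_0000"

def build_toggle_commands(instance_id):
    """Return the two PowerShell commands that cycle the device off/on."""
    return [
        f'Disable-PnpDevice -InstanceId "{instance_id}" -Confirm:$false',
        f'Enable-PnpDevice -InstanceId "{instance_id}" -Confirm:$false',
    ]

def toggle_dgpu(delay_seconds=120):
    """Wait ~2 minutes after login, then cycle the dGPU so it can
    properly power down under Optimus."""
    time.sleep(delay_seconds)
    for cmd in build_toggle_commands(DGPU_ID):
        subprocess.run(["powershell", "-Command", cmd], check=True)

# A scheduled task triggered at logon could simply call toggle_dgpu().
```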
  21. Yeah, so I see "Dynamic Boost" showing enabled, but there should be a separate entry for "Dynamic Boost 2.0", which I am not seeing.
  22. Sam Bankman-Fried’s former friends pleaded guilty and are cooperating in the FTX fraud case https://www.theverge.com/2022/12/21/23521967/sam-bankman-fried-ftx-crypto-fraud-caroline-ellison-gary-wang
  23. Is your system showing that Dynamic Boost 2.0 is available? Mine is not. (It should be an entirely separate entry in the NVIDIA "System Information" panel... I've seen it in my Precision 7560.)
  24. Ha. Don't get your hopes up. I think Dell thinks that their primary market for these systems (larger businesses) want to see both the power brick and the system chassis becoming smaller, not larger. (They're probably right.)