NotebookTalk

Etern4l

Member
Everything posted by Etern4l

  1. Sounds like a sticky situation, sorry to hear. I'll just drop these here; I've found this guy's videos to be fairly sensible and helpful:
  2. Better them? That will be it. Once superior military robotics exists, a government can simply mass-produce it to control the population, conquer any non-nuclear country, etc. Once things get out of hand, the shiny army will be ready to enforce the new world order. Not a future any sane and decent person would want.
  3. I couldn't actually find any credible info on that, but absolutely, they are all-in on AI, which is most likely why they are not even trying to be nice to the lower tiers of gamers. They understand that AI will soon dramatically exacerbate income inequality, hence the lower market segments are dead folks walking. Either you are one of the haves, in which case you are welcome to a nice 4090, or it's a waste of time and ideally you would just grab some used hardware off eBay or play on an iGPU (or frankly die rather than cause some unpleasant social unrest down the line), and stop wasting the Masters of the Universe's time. To be fair, the PC market is tanking, so it must be a depressing area to operate in, as opposed to building gods.
  4. Yes, but how do you propose to address this particular problem within the unfettered capitalist framework? Please don't give me the rational agents, lol, we know what that looks like in practice. It seems to me like the only realistic solution involves heavy-handed intervention by the FTC and similar fair-market and consumer-protection agencies around the world.
  5. Not familiar with the TLP package, but if power management is the goal, then I guess (though I'm not sure) interface priorities alone probably won't do the trick. That said, perhaps there is some advanced config trick to automatically turn lower-priority interfaces off when higher-priority ones are available, even without another package. Edit: yes, perhaps. It seems like bonding set up in active-backup mode should do this: if the active/higher-priority interface is available, the backup one is apparently kept down. I cannot test this now, but will at the next opportunity.
  6. That's really cool! I'll ask this, maybe it's something you guys have experience with. Is there a good way to get it to disconnect from Wi-Fi if there is an Ethernet connection? (Windows does this automatically.) Network performance between my Windows VM and my network scanner (a document/photo scanner) is poop if Wi-Fi is connected. It doesn't seem to know to prioritize the Ethernet connection. I can script a solution to this as well, just wondering if there is an easy way to do it. I've set a negative interface priority on the Wi-Fi interface, although I'm not sure that does the trick. The following seems to contain some good suggestions; in particular the TLP package (advanced power management for Linux) looks like it could be a direct hit. I don't have it installed so I can't confirm. https://askubuntu.com/questions/112968/automatically-disable-wifi-wireless-when-wired (A dispatcher-script sketch along the lines of that thread is at the end of this list.) Yeah, just noticed this myself. -999 for the Ethernet makes no sense, unless it has something to do with the bonding I have set up. The slave bonding interfaces have zero priority for the Ethernet, and -1 for Wi-Fi.
  7. Cool, I'm sure you will be amazed by the results. Be careful with the application, in that it's easy to rip up the "pad", and bear in mind there may be a short curing time (a day or two). Kryonaut is really only good for desktops, if at all; Conductonaut is LM; Carbonaut and the other carbon pads would be useless in this case.
  8. That's the equivalent of what socialists say: wonderful idea, but it got corrupted, etc. The reality is that there are very few democratic capitalist countries in the world where most people are happy. I'm struggling for examples. Germany? I'm not sure. Switzerland? Singapore maybe, but it's not really a clear-cut democracy. It's no doubt the fault of large corporations; however, pure unregulated capitalism is where they really thrive. In China large companies are under effective state control, so whatever they do is really on the government.
  9. I watched an interview with the CEO of Boston Dynamics. In the non-physical space, we are already living in the age of bots. On the physical side, full-on humanoid robots are not an immediate prospect; I think the push there will continue to be on the industrial, commercial, driving and military fronts first. @ryan Imagine one of those things armed, and rest assured they will be. DoD contracts are one of the major sources of funding for Boston Dynamics, and if not them, someone on the other side will do it. Physically, Atlas is already in the ballpark of the metal terminators... it's just still too dumb.
  10. Yes. Could be, or it could depend on the OS, driver version, etc. The clocks and performance are as expected, but... P2. Still, that would be consistent with your P2->P3 falloff observation.
  11. Seeing P2 under load. Great question about the power adapter, that could explain the power limit. That said, downclocking (and necessarily undervolting) the CPU should help in that case as well, since it would free up power for the starved GPU. On the other hand, how much more power can that GPU take if it's running at 74C under a 50-60W load? I would just repaste both the CPU and GPU immediately (well, as soon as the 7950/7958 arrived from ebuy7). (A quick nvidia-smi command for watching the P-state, clocks and power draw is sketched at the end of this list.)
  12. Ha, you're right. "Performance State: The current performance state for the GPU. States range from P0 (maximum performance) to P12 (minimum performance)." This is not the same as the "PowerMizer Performance Level" I tend to look at in the Linux driver... Makes sense then - the GPU was throttling a little in P3.
  13. That's the normal/desirable state under heavy load. It's the opposite of Intel C-states: P0 is the lowest, P4 the highest (and I have yet to see P4 in practice; it's probably the equivalent of Intel's short-term turbo boost, with a duration on the order of milliseconds).
  14. @Reciever Hope you are OK bro. The Dallas shooting news is unsettling, so many of those coming from over the pond these days.
  15. That's how capitalism works. The only thing that matters is the fairly short-term profit, and Nvidia's strategy, coupled with their effective monopoly, will maximise it. We should celebrate this I guess, particularly given that we have been unable to come up with and implement a better system :) That said, it is quite ironic that the advanced AI Ngreedia is working so hard to deliver to the world is super-unlikely to have any real care for our money.
  16. I think DLSS in general is a bit different, in that it basically saves compute per frame at some risk of artifacts (otherwise there would be no point using it), so if you are a casual gamer with lighter hardware, there is probably little reason not to turn it on if available. You are right though: with DLSS3, people like FPS gamers might notice some input lag, since the generated frame is fake and doesn't correspond to any new input. Anyway, we'll see what the impact is on a real game engine if and when this tech actually makes it to games; the research so far is kind of preliminary.
  17. This just in: Nvidia are in the business of coming up with software which gobbles up GPU resources, and then selling GPUs to boot, ideally cutting expensive hardware features such as VRAM to maximise the profit :) In other words: the hardware keeps getting faster, but the computational requirements grow as well, and in this case by up to 4x (roughly, not accounting for the fact that "traditional" texture decompression is handled by dedicated hardware, basically performed transparently, if I understand correctly). Again, whether this solution makes any sense depends on the balance of available VRAM, memory bandwidth and compute capacity. If they are planning on selling GPUs with constrained VRAM size and bandwidth, but sporting very fast cores, then this makes sense. For a 4090 or 3090 Ti with 1TB/s of memory bandwidth and 24GB of VRAM, this is more or less nonsense. Cards with insufficient VRAM, memory bandwidth and modest compute capability are out of luck, of course. The good news is that it will probably be a while before this makes it to games (it requires explicit support), so there will be plenty of time for any necessary hardware upgrades.
  18. I just looked through the paper, it's there: Table 4 (decompression performance for a 4k material texture set, Paving Stones): BC High 0.49 ms, NTC 0.2 1.15 ms, NTC 0.5 1.46 ms, NTC 1.0 1.33 ms, NTC 2.25 1.92 ms. This is just a simple performance test rendering a single shape at 4K: "6.5.2 Decompression. We evaluate real-time performance of our method by rendering a full-screen quad at 3840 × 2160 resolution textured with the Paving Stone set, which has 8 4k channels: diffuse albedo, normals, roughness, and ambient occlusion. The quad is lit by a directional light and shaded using a physically-based BRDF model [10] based on the Trowbridge–Reitz (GGX) microfacet distribution [76]. Results in Table 4 indicate that rendering with NTC via stochastic filtering (see Section 5.3) costs between 1.15 ms and 1.92 ms on a NVIDIA RTX 4090, while the cost decreases to 0.49 ms with traditional trilinear filtered BC7 textures." They call the performance "similar", but in fact the latency is up to 4x higher and takes up CUDA/RT cores; elsewhere in the paper they state it "might be possible to partially hide this overhead", given a fast enough GPU... For perspective, at 60 FPS the frame budget is about 16.7 ms, so the extra ~1.4 ms over BC7 already eats several percent of it, and that's on a 4090. When ascertaining whether this will be of benefit in a particular game for you, look at HWiNFO while gaming: you can see both memory controller and GPU core utilisation. My bet is that usually you will be bottlenecked on the GPU anyway, so yes - with this you may be able to see higher resolution textures, but at some cost to the FPS, which presumably doesn't look brilliant on a 6GB 3060 mobile to begin with.
  19. The catch is that this will put additional load on the GPU cores, so it will compete for compute capacity with other game-engine functions. It also incurs 2.5-4x higher latency compared to the very fast dedicated texture decompression hardware (on a 4090), so a fast GPU will be required. Bye bye Turing and Ampere, upgrade to 4000/5000-class GPUs required. Brilliant!
  20. Cool, good luck. I just run fixed ratios as well since I don't care about single-core performance. As long as CPU temps stay under 90C and the CPU is holding the clocks (it probably won't if the temps go too far into the 90s), you should be good. Failing that, you can also try down-clocking or power-limiting the GPU, if possible, so it stays under 70C as well. Same story there: you want stable clocks under load. The difficulty, of course, is that in a laptop the two are interconnected. I think any loss from down-clocking/power-limiting will be smaller than the very noticeable/aggravating performance losses and dips caused by thermal throttling. Again, with all due respect to Dell pastes (not much of it left TBH), you can probably get significantly more performance out of your laptop with a repaste, especially if the thermal-throttling hypothesis checks out.
  21. Turning turbo off would be unnecessarily harsh. I'd hope ThrottleStop or XTU are functioning properly, so you can just drop the core ratios by 1 or 2. This works because of the hugely non-linear marginal power cost of those final few core ratio points. There are somewhat similar tools on Linux, which I haven't had a chance to use because of the desktop BIOS (a rough Linux-side sketch is at the end of this list). For a concrete example, I'm running my 13900K at 53/43 ratios, resulting in temps under 80C under full load (mid-eighties under CB23, where it hits over 40K). Were I to let it run at the stock 55/43, temps would hit 90-100C and performance would actually decrease due to the resulting throttling behaviour. HTH Edit: one other thing worth trying would be a repaste with the Honeywell phase-change paste/pad. No risk, unlike with LM, and probably similar results.
  22. That's weird, but certainly possible. Either way, I'd bet a good amount it's just a form of thermal throttling according to hidden parameters set by Dell. I updated my earlier answer with a suggested solution. Edit: Yes, a less aggressive performance mode might also help, but in my experience, with Alienwares at least, the effect is minor, so that might not be enough.
  23. That's a deep dive for sure. You surely must have enjoyed it, or else you would have quit by now.
  24. Sorry to hear. At the risk of stating the very obvious, HR departments are there not to protect Humans but to protect the Business. If they got involved, it means that either the Human Resource posed a risk to the Business, or the Business determined they wanted to get rid of the person for whatever reason, and HR was tasked with figuring out a legally plausible cause for termination (that's probably less of an issue in the US with employment at will). Dealing with those people is rarely pleasant, apart from perhaps in the hiring process, when the company wants to hire you a lot and sends easy-on-the-eye HR people in to court the candidate (in which case they might want to ask themselves "what's the catch?").
  25. Referring to the GPU-Z screenshot you posted earlier, the CPU was at 100C and the GPU at 74C. That's pretty toasty; I'm sure Dell starts throttling things down at that point, and probably fair enough. Clearly they (Dell's BIOS + vBIOS would be the precise culprits, I imagine) prioritise the CPU, so they just harshly downclock the GPU. I'm not sure why you thought it was power limited at 53W. Obviously you must have undervolted as far as possible already. What I would try in your shoes is to actually downclock the CPU (and perhaps the GPU) to keep the performance at a slightly lower, but steady level (see the nvidia-smi sketch at the end of this list for the GPU side). Setting lower temp/power targets might also be an option, though it doesn't work as well in my experience.
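
Re posts 5/6 above (auto-disabling Wi-Fi when wired): besides the bonding route, the approach most answers in that askubuntu thread boil down to is a NetworkManager dispatcher script. A minimal sketch, assuming NetworkManager manages both interfaces and that your wired interface name starts with enp or eth (adjust to your hardware); I haven't run this exact script, so treat it as a starting point rather than a drop-in solution:

```bash
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/99-wifi-toggle  (root-owned, executable)
# NetworkManager calls this with the interface name in $1 and the event in $2.
IFACE="$1"
EVENT="$2"

case "$IFACE" in
  enp*|eth*)                         # wired interface names; adjust for your machine
    case "$EVENT" in
      up)   nmcli radio wifi off ;;  # wired link came up: switch the Wi-Fi radio off
      down) nmcli radio wifi on  ;;  # wired link went down: bring Wi-Fi back
    esac
    ;;
esac
```

That gives roughly the Windows behaviour without touching bonding; the active-backup bond route should work too, it just keeps the Wi-Fi link configured rather than powering the radio down.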
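Re posts 10-13 (the P-state confusion): rather than inferring the state from clocks, you can watch it directly from a terminal. A minimal sketch using standard nvidia-smi query fields; the exact output layout may vary a bit by driver version:

```bash
# Print P-state, SM clock, power draw and temperature once per second
nvidia-smi --query-gpu=pstate,clocks.sm,power.draw,temperature.gpu \
           --format=csv -l 1

# Or dump the full performance section, including active clock-throttle reasons
nvidia-smi -q -d PERFORMANCE
```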
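Re post 21: one rough Linux-side equivalent of dropping a core ratio point or two in ThrottleStop is simply capping the maximum CPU frequency. A sketch assuming the cpupower utility (from linux-tools) and the intel_pstate driver; the 4300 MHz / 85% values are purely illustrative, pick whatever your cooling can hold:

```bash
# Check the current policy and hardware limits
cpupower frequency-info

# Cap the maximum frequency on all cores (illustrative value)
sudo cpupower frequency-set --max 4300MHz

# Alternative with intel_pstate: cap performance as a percentage of maximum
echo 85 | sudo tee /sys/devices/system/cpu/intel_pstate/max_perf_pct
```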
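Re post 25: if CPU-side tuning isn't enough, the GPU can be pinned from the command line as well. A sketch using standard nvidia-smi options, with illustrative values; note that many laptop vBIOSes (Dell's included, as far as I can tell) simply refuse power-limit changes, in which case locking clocks is the fallback:

```bash
# Show the currently enforced and min/max allowed power limits
nvidia-smi -q -d POWER

# Try to lower the board power limit to 80 W (often blocked on mobile GPUs)
sudo nvidia-smi -pl 80

# Lock the GPU core clocks to a 300-1500 MHz window (Turing and newer)
sudo nvidia-smi -lgc 300,1500

# Undo the clock lock
sudo nvidia-smi -rgc
```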