NotebookTalk

All Activity

  1. Today
  2. The author of the Reddit post mentioned above added full SPECworkstation 4.0 results to his post: it seems the ThinkPad P16 Gen3 performs better than expected, yet still worse than the direct competitors when you choose an RTX Pro 4000 or 5000.
  3. I believe there is an online review comparing the PMP18 to the PMP16 and the HP ZBook Fury G1i 18; the PMP16 does not lose much compared to the 18" variant, staying within single-digit percentages. The HP was noticeably weaker in most tasks.
  4. The Dell Pro Max 18 Plus is clearly in a higher category. It would be interesting to compare the ThinkPad P16 Gen3 with the Dell Pro Max 16 Plus instead.
  5. Hello guys, I am enclosing the comparison between the Dell Pro Max 18 Plus and the ThinkPad P16 Gen3 with the same configuration. You may as well ignore the PL/TDP data, as on the PMP18 it means the theoretical maximums of the platform, whereas on the TP P16 G3 it already counts in the platform power/thermal limitations. Both systems were tested with a clean OS and stock power settings, same conditions 1:1; it was just a misunderstanding. TL;DR: with a 55% larger platform power budget, the Dell has about 25% more power during sustained CPU load, and during combined CPU and GPU load (such as gaming) it has about 32% more power. It also has significantly lower noise output, but the CPU temperature does rise up to 105°C and can spike up to 109°C on individual cores. On the ThinkPad, the temperatures peak at around 98°C due to the more aggressive thermal policy. Measured stock memory latency in this configuration: 108 ns (PMP18) vs 151 ns (P16G3).
  6. ebourg

    ThinkPad P16 Gen 3

    The P16 Gen 3 and P1 Gen 8 share the same Tandem OLED display, which has been reviewed here: https://youtu.be/GqyOD0JpOzY?si=jNfy2FFK32Vfs_M8&t=800 The PWM frequency is 1200 Hz, so it shouldn't be noticeable. The IPS display, on the other hand, doesn't use PWM.
  7. The vapor chamber for the P870xx series is the best air cooling solution! :-)
  8. It was a choice between "All I Want for Christmas Is You" or this... sorry Mariah, I can't stop laughing.
  9. Anyone tried the A3000 from x-vision in the Viking? I have the PNY myself, but I'm thinking about putting that in the m17 instead…
  10. @Drimacus hello, where did you get that heatsink? I had to modify my own laptop casing just to cool the RTX 4080.
  11. Yesterday
  12. News: the i9 9900KF with the BIOS I uploaded works fine in dual channel. So yes, it was indeed a pressure issue from the LGA bracket.
  13. Indeed they are, it was just a brain fart on my part. I wasn't following the CPU scene closely; since Intel made that change, I sort of glossed over that bit.
  14. You hate cold and I hate heat. My body has adjusted to the hot weather here, but extreme heat is a health risk just as hazardous as freezing to death; extreme heat and extreme cold are both an issue. Temperature aside, I love living in the desert because I do not like precipitation in any form. I hate rain and I hate snow. I like it very dry, with clear skies and low/no humidity, whether it is hot or cold. I find it easier to get warm if I am too cold: I can bundle up and add more blankets to the bed. If I am too hot, it is harder to fix that problem. You can be wet and naked and still be too hot, LOL. Both can kill you.
  15. This ain't an AI overview. This is real. Winter is here to stay for the next 6+ months, and I hate it even more than last year. Over a meter of this white beauty trash has already come down in the last few days. Some love it, but it won't be me. I wish this was a fake snowy picture from my phone, but nope, it's just as real as the melted 12VHPWR connectors that pop up in pictures. Taken from the window in the living room. In God we trust, LOL.
  16. It won't be the last one when the AI bubble bursts and ruins your retirement. A hot climate is easier to survive homeless than here at home in the cold North. You wouldn't survive 3 days here when the snowstorms and cold hit you in the face. Winter is here, bro @Mr. Fox, and I hate it even more than last year. Over a meter of this white beauty trash has already come down.
  17. I delid my own with the der8auer tool. Works fantastic. You just have to spend about 30 minutes moving it back and forth with the tool until it is loose. Tedious but foolproof, as long as you keep going until it falls off and don't try to lift the IHS before it is loose.

      Usually in the range of 500-1000 Cinebench R23 points lower with the Apex. AMD CPUs are inconsistent between Cinebench runs and can have a wide variance between runs with the same CPU; there are too many artificial algorithms in play with Ryzen. But I can't get within 500 to 1000 points of my best Strix or AORUS Master scores using the Apex with the same CPU installed. It almost has to be something in the firmware; it would not seem logical for the motherboard hardware to affect it. Also worth mentioning: it is with the same Windows installation. To rule out an OS issue, I create an image from one machine, restore it on the other using Macrium Reflect, and reactivate, so both have the same software, tweaks, drivers and OS tuning. This makes no sense to me. There is probably something I need to do differently that I don't know about, maybe an obscure BIOS setting I need to turn on or off on the Apex. This should be an almost entirely mathematical outcome based on core count, IPC and clock speed, as long as nothing is interfering with the Cinebench run.

      This EPYC CPU seems like a really good sample and I am pretty sure I am going to keep it. It looks better than average judging by the V/F curve and the voltage requirements in practice. Getting it to function like an X3D in a scenario where it should has not worked for me yet: when I try to configure it to use the 3D V-Cache CCD, it defaults to the non-V-Cache CCD no matter what I have tried so far (for a manual workaround, see the affinity sketch after this list). Configured to function like an ordinary CPU it seems fine, so I suspect this is a firmware configuration problem, an OS/game/chipset driver bug, or even an ignorant noob issue on my part (relating to firmware configuration). I did a clean OS install with the latest chipset drivers and set everything the way it is supposed to be done, as far as I know, and it still parks the wrong CCD and activates the one without V-Cache, LOL. I don't necessarily care about the V-Cache, but if I can use it as leverage for 3DMark scores I certainly want to take advantage of it. I don't expect it to matter much at all with something like Cinebench, wPrime, Pifast, or Y-Cruncher.
  18. Xeons have been history since Intel started with big/small CPUs, so I'm not sure if you are serious? 🙂 You should look at the Legion 9i for this gen: it has the keyboard, GPU and screen that you want, and it is also a more powerful machine that won't drain the battery while you are gaming.
  19. Mobile Xeon is dead; they haven't had one since 11th gen / 2021. It really is just a change in branding rather than a change in functionality, though. Intel offers select CPUs that support the old "Xeon" pro features (e.g. ECC memory), and it appears that Dell is offering those in these systems. ...Actually, it looks like the entire HX lineup supports these this time. When they originally dropped the Xeon branding for 12th gen, it was split, with some CPUs supporting it (e.g. the 12950HX) and some not (e.g. the 12900HX). The 12950HX is basically a Xeon of that generation, and they still have CPUs in the same class in 13th gen and ... whatever this new batch is called, 2nd gen Core Ultra.
  20. That is very odd. How much lower: margin-of-error lower or substantially lower? Do you delid your own AM5 CPUs or send them off? X3D is very much hit or miss, and it really comes down to over-saturating the cache and watching performance become uneven; it is a one-trick pony aimed at gaming. A perfect example is Fallout 76: an older engine with such small assets that it just destroys Intel even at 4K. But take something like the modernized WoW engine, or even Starfield at 4K, and you can get those drops when the cache is being over-saturated (the working-set sketch after this list shows the effect in miniature). An overabundance of player data in WoW can render X3D moot many times, and heavily populated areas or raids with a ton of player FX popping off will tank just about any CPU in the face of a 5090, even at 4K. You will see GPU utilization all over the map, from 99% down to the 50s, depending on what's going on. tl;dr: when it works, oh mama! When it doesn't, ugh.
  21. While I’m already on a roll… what happened to the Xeon option in the new lineup? And we’re still stuck with no real keyboard upgrade path on the flagship models. You basically need to buy a teenager’s Alienware if you want a proper keyboard or a non-ISV GPU out of the box. And yes, I’ll say it outright: I miss the old clicky keys, and everyone in the office can deal with it. If they offered a lever that advanced the paper like a typewriter, I’d probably add that too just for the experience. 😅
  22. Following up on my post a few weeks ago about burned 5090 connectors using Nvidia's official adapter: I've been doing deep dives trying to find any Nvidia 5090 Founders cards using the supplied Nvidia adapter that have burned, and I haven't found anything substantial yet. It is always third-party adapters / PSU cables that burn. Asking Google AI came up with the same results. Of course, we know using the card with the supplied cable isn't a guaranteed solution, but it does make it easier to get the card repaired/replaced by the AIB, since there are no third-party cables involved when using the supplied AIB cable or a same-brand PSU.

      ---------------------------

      9070 XT prices are about as good as they are going to get. Even Sapphire's Nitro is now officially $100 below MSRP, along with the Taichi 9070 XT at $50 below MSRP, and the Taichi has Black Friday price protection in place at NE for all ASRock 9070 XT models, which are already all priced below MSRP, so you know you're going to get some coin back after BF. Plenty of $999 5080s and $749 5070 Tis, with a few models below MSRP; the Zotac 5080 has Black Friday protection even at its $999 pricing. Same goes for 5070s, with several models below MSRP. I'm expecting some serious deals on Black Friday at this rate, as there is a glut of GPUs, and especially since it seems the Super series has been pushed way back with memory-pricing uncertainties, there's no need to wait now if you need a primary or even secondary GPU. I'm leaning towards picking up a Nitro+ 9070 XT as a backup/secondary/SFF model, especially as the recent batch of Taichis are using Samsung memory... If anyone is in the market for a card not named 5090, now is the time to buy, up to and through Black Friday. The cheapest actually available 5090 is the $2800 Zotac AIO model in the US...

      Bonus: extended return windows are now in effect at da egg and Amazon...
  23. I'm waiting for the fans to arrive to disassemble everything again, bend the upper plate down closer to the GPU, and make another cut so I can close the case.
  24. Those "dimsion" sticks were exactly what I had. I think they were running a bit faster than 1600 MHz (see my CPU-Z), which is why I thought it was some wrong config. Now I can only find used ones; I might buy just 2 sticks to see the difference in games. @erikmiskolin: nice to know you got it running at 130 Hz. The maximum I tried on my 1920x1080 was 75 Hz, but surely it can do more; my GTX 980 isn't worth the effort of that screen, so I try to play at 60 fps with good image quality. I also managed to get my heatsink on. I couldn't desolder the pins (my soldering pen is also a fake from China), so I just removed the plastic, used super glue to attach the cables to the pins (no 12 V, just 3 V and 5 V) and white EC360 thermal glue to prevent shorts. The GPU sits a bit lifted but straight. I always watched the "GPU hot spot" temperature before and never paid much attention to the core temperature. My fans haven't arrived yet, so I am using a fan pad, but now the hot-spot temp is stable at 90°C or less (before, it was sometimes at 95°C with the fans running fast and loud) and the core temp is much more stable at 75°C. New thermal paste (Hydronaut), not well spread; it was just for testing, but I feel I can keep testing 😋. I can run all the fans at half their previous speed, and now I can overclock the GPU more: I am using core clock +81 and memory +190, no shutdowns. I'm playing The Witcher 3 at medium graphics at 30-40 fps and Destiny 2 at medium at 60 fps, dropping to 40 fps in dense areas. 550 points in the 3DMark demo; it can do more with the overclock, but it's not stable for games. (The record is 727 with the same hardware, how?) I never see the CPU reach 55 W in games, and it only hits 3.9 GHz on some threads, mostly 3.7-3.8 GHz; could that be because of the cheap 230 W PSU from China? I never let the GPU throttle, but which sensor makes it throttle at 101°C, hot-spot temp or core temp? (See the throttle-reason sketch after this list for one way to check.) Sorry for my English.
  25. A bit of context on ISV GPUs, ECC, and why these distinctions made sense in the past but hold very little value today, especially in mobile systems.

      Historically, ISV-class GPUs (Quadro/RTX Pro) ran ECC VRAM enabled by default, and you couldn't override it. Their drivers were built for long-duration, mission-critical workloads, so clocks were intentionally conservative. This made sense for environments where any calculation error could have financial or legal consequences: think Wall Street, CAD/CAM shops, or regulated verticals. Back in the early 2000s, Precision mobile workstations were literally the only laptops offering system-level ECC memory, and ISV certification actually mattered. The software ecosystem was a mess: standards were looser, OpenGL implementations varied wildly, and ISV drivers guaranteed predictable behavior across entire application suites. That was the right tool for that era.

      Today? The landscape is completely different:

      • Modern applications and frameworks self-validate, self-correct, and handle error states internally.
      • DDR5/GDDR7 include on-die ECC and advanced signal-integrity correction long before data ever reaches system memory.
      • Driver ecosystems are mature and unified; the old instability that justified ISV certification is largely gone.
      • Even system ECC memory is increasingly redundant for most mobile workflows.

      The big reality check: in mobile platforms, the theoretical advantage of ISV GPUs (sustained stable clocks) simply cannot manifest. Modern mobile thermals hit the ceiling long before ISV tuning makes a difference. Both Pro and non-Pro GPUs will throttle the same once the chassis saturates. That endurance advantage only shows up on full desktop cards with massive cooling budgets.

      That leaves one actual, modern-day benefit: legacy OpenGL pipelines. Outside of that niche, ISV certification brings almost nothing to the table, desktop or mobile. (Whether a given card even exposes ECC is easy to check; see the ECC sketch after this list.)

      Bottom line: ISV certification made sense 15-20 years ago. Today, especially in mobile workstations, it's a legacy checkbox with minimal practical value. Non-Pro GPUs offer the same real-world performance and, in many cases, better flexibility for modern workflows.
  26. Is there even such a thing as budget gaming nowadays? AMD walks their own way... Maybe they'll even beat Nvidia to raising prices first? Better than letting Nvidia decide the price of Radeon cards? 🤔

      AMD rumored to raise GPU prices just as Radeon RX 9070 XT finally reaches MSRP

      AMD & NVIDIA Could Kill Off Budget GPUs as Memory Shortages Drive Costs Up, Leaving Entry-Level Gamers With Little Options

      And used will also cost you more... The US has now become Norway. Tax hell. Everything I buy from abroad is taxed very hard, used or new doesn't matter.

      Vintage PC parts are getting hit with huge tariffs, even when they're worth almost nothing: a $355 box of retro Apple boards ended up with a $684 tariff
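A few sketches follow for the technical posts above. First, the affinity sketch for the CCD-parking issue in post 17: if the scheduler keeps landing a workload on the wrong CCD, one manual workaround is to pin the process to the V-Cache CCD's cores yourself. This is a minimal sketch using psutil, and it assumes the V-Cache CCD owns the first half of the logical CPUs, which is common but not guaranteed; verify your own topology (e.g. in HWiNFO or Ryzen Master) first.

```python
# Minimal sketch: pin a process to one CCD's cores with psutil.
# Assumption (verify per system): the 3D V-Cache CCD is CCD0 and owns the
# first half of the logical CPUs. On an 8+8-core dual-CCD part with SMT
# that would be logical CPUs 0-15.
import psutil

def pin_to_first_ccd(pid: int) -> None:
    logical = psutil.cpu_count(logical=True)   # e.g. 32 on a 16C/32T part
    first_ccd = list(range(logical // 2))      # assumed V-Cache CCD
    proc = psutil.Process(pid)
    proc.cpu_affinity(first_ccd)               # restrict OS scheduling
    print(f"{proc.name()} (pid {pid}) pinned to CPUs {first_ccd}")

if __name__ == "__main__":
    pin_to_first_ccd(12345)  # hypothetical PID, e.g. from Task Manager
```

Note this only constrains scheduling; it does not fix whatever firmware or chipset-driver logic is parking the wrong CCD in the first place.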
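Second, the working-set sketch for the cache-saturation point in post 20: pointer chasing measures dependent random loads over a growing working set, and per-access time jumps once the set outgrows the last-level cache (96 MB on current X3D parts vs roughly 32-36 MB elsewhere). In CPython the absolute numbers are dominated by interpreter overhead, so only the relative cliff is meaningful; port the same idea to C for honest latencies.

```python
# Minimal sketch: latency cliff when a working set outgrows the L3 cache.
# A single random cycle forces each load to depend on the previous one,
# which defeats prefetching and out-of-order tricks.
import random
import time

def chase_ns(n_slots: int, steps: int = 2_000_000) -> float:
    order = list(range(n_slots))
    random.shuffle(order)
    nxt = [0] * n_slots
    for i in range(n_slots):                   # build one big random cycle
        nxt[order[i]] = order[(i + 1) % n_slots]
    idx = order[0]
    t0 = time.perf_counter()
    for _ in range(steps):
        idx = nxt[idx]                         # dependent random load
    return (time.perf_counter() - t0) / steps * 1e9

if __name__ == "__main__":
    # Nominal sizes assume ~8 bytes per slot; CPython objects are fatter,
    # so treat them as lower bounds. Large sizes take a while to build.
    for mb in (4, 16, 32, 64, 128, 256):
        n = mb * 1024 * 1024 // 8
        print(f"{mb:4d} MB nominal working set: {chase_ns(n):7.1f} ns/access")
```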
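Third, the throttle-reason sketch for the question in post 24: on NVIDIA GPUs, nvidia-smi reports the core temperature and the currently active throttle reasons directly, which tells you whether a thermal or power limit is biting. The hot-spot sensor is not exposed through these queries (HWiNFO or GPU-Z show it), and older mobile parts may return "[Not Supported]" for some fields. A minimal sketch, assuming nvidia-smi is on PATH:

```python
# Minimal sketch: read core temperature and throttle reasons via nvidia-smi.
import subprocess

FIELDS = [
    "temperature.gpu",                               # core temp, deg C
    "clocks_throttle_reasons.sw_thermal_slowdown",   # driver thermal limit
    "clocks_throttle_reasons.hw_thermal_slowdown",   # hardware thermal limit
    "clocks_throttle_reasons.sw_power_cap",          # power limit
]

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=" + ",".join(FIELDS),
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

for name, value in zip(FIELDS, out.split(", ")):
    print(f"{name:48s} {value}")
```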
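Finally, the ECC sketch as a companion to post 25: whether a given NVIDIA card even exposes ECC is a one-liner to check. Consumer GeForce parts typically report "[N/A]" here, while Quadro/RTX Pro parts report Enabled or Disabled (pending shows a change queued for the next reboot). Again assuming nvidia-smi on PATH:

```python
# Minimal sketch: query current/pending ECC mode per GPU via nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,ecc.mode.current,ecc.mode.pending",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

for line in out.splitlines():
    idx, name, current, pending = [f.strip() for f in line.split(",")]
    print(f"GPU {idx} ({name}): ECC current={current}, pending={pending}")
```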