NotebookTalk

Dell Pro Max 16/18 Plus (2025 model) pre-release discussion — MB16250, MB18250


Recommended Posts

2 hours ago, Aaron44126 said:

1. If you get an iGPU-only system, you will be able to use the HDMI port and it will be attached to the iGPU.

So, with both iGPU and dGPU present, the table in the Dell docs means the HDMI port is connected to the dGPU only? I would love to have everything connected to and usable by the iGPU, and only offload work of interest to the dGPU, if at all.


42 minutes ago, pickwick said:

So, with both iGPU and dGPU present, the table in the Dell docs means the HDMI port is connected to the dGPU only? I would love to have everything connected to and usable by the iGPU, and only offload work of interest to the dGPU, if at all.

 

I think the docs are right with regard to this. I haven't tested this system, but that is how prior generations behaved (Precision 7000 line starting with 7X30, when Dell switched to using DGFF GPUs).

 

If you want everything attached to the iGPU, plug your displays in via the USB-C ports, either via a dock or directly.

  

16 hours ago, pickwick said:

I have two external displays, shared between a business laptop and a private one. One display has a free HDMI input and the other a free DVI-D input. The Dell has native HDMI in theory, which would leave me with one additional DVI-D display to support. There seem to be adapters from Thunderbolt/USB-C to DVI-D for small prices. I only need to support 1920×1200. Those adapters should work, shouldn't they?

 

USB-C to HDMI or USB-C to DVI (...or even USB-C → HDMI → DVI...) should work fine, and attach to the iGPU.

 

If it were me, I'd just use a USB-C → HDMI adapter, as that has more potential for future use, and then connect it to your monitor with an HDMI → DVI cable if necessary.

Apple MacBook Pro 16-inch, 2023 (personal) • Dell Precision 7560 (work) • Full specs in spoiler block below
Info posts (Windows) — Turbo boost toggle • The problem with Windows 11 • About Windows 10/11 LTSC

Spoiler

Apple MacBook Pro 16-inch, 2023 (personal)

  • M2 Max
    • 4 efficiency cores
    • 8 performance cores
    • 38-core Apple GPU
  • 96GB LPDDR5-6400
  • 8TB SSD
  • macOS 15 "Sequoia"
  • 16.2" 3456×2234 120 Hz mini-LED ProMotion display
  • Wi-Fi 6E + Bluetooth 5.3
  • 99.6Wh battery
  • 1080p webcam
  • Fingerprint reader

Also — iPhone 12 Pro 512GB, Apple Watch Series 8

 

Dell Precision 7560 (work)

  • Intel Xeon W-11955M ("Tiger Lake")
    • 8×2.6 GHz base, 5.0 GHz turbo, hyperthreading ("Willow Cove")
  • 64GB DDR4-3200 ECC
  • NVIDIA RTX A2000 4GB
  • Storage:
    • 512GB system drive (Micron 2300)
    • 4TB additional storage (Sabrent Rocket Q4)
  • Windows 11 Enterprise LTSC 2024
  • 15.6" 3840×2160 IPS display
  • Intel Wi-Fi AX210 (Wi-Fi 6E + Bluetooth 5.3)
  • 95Wh battery
  • 720p IR webcam
  • Fingerprint reader

 

Previous

  • Dell Precision 7770, 7530, 7510, M4800, M6700
  • Dell Latitude E6520
  • Dell Inspiron 1720, 5150
  • Dell Latitude CPi

To the best of my knowledge, when Optimus puts the dGPU to sleep, the iGPU will function through the HDMI port. Optimus is not Dell's software; it's NVIDIA's software, designed specifically to switch between the two. If HDMI is not working, something else must be preventing the port from being used.

The impossible is not impossible; it just hasn't been done yet.


1 hour ago, MyPC8MyBrain said:

To the best of my knowledge, when Optimus puts the dGPU to sleep, the iGPU will function through the HDMI port. Optimus is not Dell's software; it's NVIDIA's software, designed specifically to switch between the two. If HDMI is not working, something else must be preventing the port from being used.

 

This is how most systems behave, but in these systems the HDMI port is hard-wired to the dGPU, not the iGPU. (Unless you bought an iGPU-only system and it has the DGFF spacer card installed, which basically routes the signal from the HDMI port to the iGPU.) So, connecting a display to the HDMI port will always cause the dGPU to engage.
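If you want to verify this yourself, one way is to watch the dGPU's performance state with `nvidia-smi` before and after plugging a display into HDMI. A rough sketch (assumes NVIDIA drivers are installed; the query field names are the standard `nvidia-smi` ones, and the P-state interpretation is a general rule of thumb, not Dell-specific):

```python
import subprocess

def parse_status(csv_line):
    """Parse one line of nvidia-smi CSV output into a dict.

    Example input: "NVIDIA RTX A2000 Laptop GPU, P8, 0 %"
    """
    name, pstate, util = [field.strip() for field in csv_line.split(",")]
    return {"name": name, "pstate": pstate, "utilization": util}

def dgpu_status():
    """Query the dGPU's name, performance state, and utilization.

    P8 generally means the GPU is idle/asleep; P0-P2 means it is
    actively driving a display or running work.
    """
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,pstate,utilization.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return parse_status(out)
```

On a system where HDMI is wired to the dGPU, you would expect the reported pstate to move out of P8 once a display is connected to that port.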



Thanks so far, that was pretty helpful already!

 

Does anyone do AI stuff with the dGPU, and if so, what? I've read in many places that 8 GiB of VRAM is most likely not enough, and lots of pre-built configurations seem to include the 3K with 12 GiB of VRAM as well. That should be almost as fast as a 5070 Ti, which can be found in many high-end gaming laptops, but it's more like an entry-level card. 🙂

 

Does anyone have experience with the RTX PRO 3K? Is 12 GiB of VRAM reasonable, or most likely just a waste of a few hundred dollars?

Link to comment
Share on other sites

https://ollama.com/library/llama3.1

Minimum requirements for Llama 3.1 8B (quantized):
VRAM (GPU memory)    ≥ 8 GB
System RAM    ≥ 16 GB

 

If you are interested in the larger, more powerful Llama 3.1 variants, the requirements scale up dramatically, moving them out of the range of most standard PCs:
Llama 3.1 70B    ≈ 35 - 50 GB VRAM
Llama 3.1 405B    ≥ 150 GB VRAM
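Those figures line up with a simple back-of-the-envelope estimate: the weights take roughly (parameter count × bits per weight ÷ 8) bytes, plus some headroom for the KV cache and activations. A quick sketch (the 20% overhead factor is my own rough assumption, not an ollama number):

```python
def estimate_vram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough VRAM needed to run a model, in GB.

    params_billion  : parameter count in billions (e.g. 8 for Llama 3.1 8B)
    bits_per_weight : 16 for fp16, 4 for typical Q4 quantization
    overhead        : loose factor for KV cache/activations (assumption)
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# Llama 3.1 8B at 4-bit quantization: ~4.8 GB -> just fits in 8 GB of VRAM
# Llama 3.1 70B at 4-bit: ~42 GB -> in the 35-50 GB range quoted above
```

By this estimate, a 12 GiB card like the 3K comfortably runs quantized 8B-class models with room for longer contexts, but nothing close to 70B on the GPU alone.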


Hardware support
https://docs.ollama.com/gpu

Compute capability (CC)
https://developer.nvidia.com/cuda-gpus
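The ollama GPU docs list NVIDIA cards with compute capability 5.0 or newer as supported, so the check is just a tuple comparison. A small sketch (the 5.0 threshold is taken from the ollama docs linked above; the example CC values are from NVIDIA's public compute-capability table):

```python
MIN_CC = (5, 0)  # minimum compute capability per ollama's GPU docs

def ollama_supports(cc):
    """Return True if a GPU with compute capability (major, minor) is supported."""
    return tuple(cc) >= MIN_CC

# If PyTorch is installed, you can read your GPU's CC directly:
#   torch.cuda.get_device_capability(0)  -> e.g. (8, 6) for an RTX A2000
```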


