NotebookTalk

4000 series set to be 150 percent faster than previous 3090


ryan

Recommended Posts

Just saw an ad on my phone saying the new 4000 series will require a 1500W power supply!! Thoughts?

Oh, and it gets released in September.

https://www.tomsguide.com/news/nvidia-geforce-rtx-4000

According to what Wccftech calls "reliable leakers," the new hardware will likely pack a punch. There's talk that the Nvidia GeForce RTX 4090 flagship card will be powered by the Ada Lovelace AD102 and come with 18,432 CUDA cores with a clock speed as high as 2.5GHz and 92 teraflops of compute performance.

For comparison’s sake, the current top-end Nvidia GeForce RTX 3090 card packs 10,752 CUDA cores, a 1.6GHz clock speed and around 37 teraflops of compute power.
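The quoted figures are internally consistent with the usual rule of thumb of 2 FP32 operations (one fused multiply-add) per CUDA core per clock. A quick sanity-check sketch, using the article's numbers; note the ~37 TFLOPS figure for the 3090 implies its ~1.7GHz boost clock rather than the 1.6GHz quoted:

```python
# Rule-of-thumb peak FP32 throughput: cores * clock * 2 ops per clock.
# Figures below are the leaked/quoted numbers from the article, not
# confirmed specs.

def peak_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * clock_ghz * 2 / 1000

print(f"RTX 4090 (leak): {peak_tflops(18432, 2.5):.1f} TFLOPS")        # ~92
print(f"RTX 3090 (~1.7 GHz boost): {peak_tflops(10752, 1.7):.1f} TFLOPS")  # ~37
```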


I'll believe it when I see it, and I'm taking it with a huge grain of salt for now.

 

WCCFTech is at the lowest rung of reliability for rumors in my book.  Sure, sometimes they come true, but when you publish every rumor of course some of them are going to come true.  And that's not anything specific about nVIDIA, they were all about the AMD rumors when they were a new site, and sure enough a few of them came true and a lot of them didn't, or were only partially accurate.

 

But even if the "best case" comes true, and the high-end-to-high-end comparison is 2.5 times as fast?  One, it's going to cost a small fortune.  And two, 450W - 850W of power usage?  I don't want that card.  Not only the power bill and environmental irresponsibility, but that's a lot of heat to disperse and would not fit in my quiet PC from an acoustic standpoint.  AMD and nVIDIA have been chasing the performance crown at the expense of power efficiency for years, and if these grain-of-salt rumors wind up being true, nVIDIA is turning that up to 11 with their newest cards.

 

I've still got 8.5 years of warranty on my 650W Seasonic PSU.  No point in putting a video card that blows the whole power budget all by itself in there.


Desktop: Core i5 2500k "Sandy Bridge" Processor | RX 480 8 GB | 32 GB DDR3 | 850 Evo + Several HDDs | 8.1 Pro

Laptop: MSI Alpha 15 | Ryzen 5800H | Radeon 6600M | 16 GB DDR4 | 512 GB SSD | 10 Home

Laptop history: MSI GL63 (2018) | HP EliteBook 8740w (acq. 2014) | Dell Inspiron 1520 (2007)


I agree with everything you just said... but I couldn't help noticing the "turn it up to 11" reference :classic_cool:

(image: Spinal Tap amp dial turned up to 11)

 

This IS Spinal Tap!


Thunderchild // Lenovo Legion Y740 17" i7-9750H rtx2080maxQ win10 

Rainbird // Alienware 17 (Ranger) i7-4910mq gtx980m win10


A friend of mine is quite, how do I say this, firm maybe? about his own opinions. The point is, a couple of years ago we were having a conversation over some hotdogs and beer, and he tried to convince me of the tick-tock model (according to him, applicable to technology in general), where the current "tock" was about power efficiency and the future would be focused on that characteristic. Suffice to say, at the time I already owned a laptop with an RTX 2070, capped by the Max-Q design to fit its thermal and power constraints. After trying several times to show him that the Turing generation did increase performance, but still with a considerable increase in power consumption, I left him happy by agreeing with him. Maybe in CPUs this trend is getting better (more performance, less power), but with Nvidia you will have to get a dedicated industrial-grade power supply for your house in the near future.

 

I think I will have this conversation again with him just to troll him.


Yeah, I'm actually slightly rebuilding my rig at the moment and going mini-ITX. I have the mobo and case/PSU/AIO on the way; it's going to replace my micro-ATX. I'm just moving the CPU+GPU and everything into the new case. The reason I'm bringing this up is that SFF is appealing to me, and the 40 series seems to be extremely SFF averse. The 3090 is 350W TBP... you can easily get away with that in an SFF build using a 750-850W PSU and a sane CPU. With these 40 series heading north of 450W, we're going to need some serious SFF PSU advancement; I haven't seen one over 850W. You could probably juggle your CPU and 40-series GPU and make an 850W PSU work fine, but when 850W is starting to hit limits, we're in trouble. Never mind that to get the most efficiency you usually have to run it at about 60-70% load unless it's Platinum or Titanium rated. SFF builds are about to get STEAMY.
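To put rough numbers on that juggling act, here's a back-of-the-envelope budget. The component draws are hypothetical ballpark figures, not measurements; plug in your actual parts:

```python
# Hypothetical peak draws in watts for an SFF build with a rumored
# 450W 40-series card.
build = {
    "GPU (rumored 40-series)": 450,
    "CPU under boost": 150,
    "Board, RAM, SSDs, fans": 60,
}

peak_w = sum(build.values())
psu_w = 850
print(f"Peak draw: {peak_w} W ({peak_w / psu_w:.0%} of an {psu_w} W PSU)")

# Keeping sustained load under ~80% of the rating leaves headroom
# for transient spikes:
print(f"Comfortable PSU size: {peak_w / 0.8:.0f} W or more")
```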


[Draupnir] R5 5600X | EVGA 3080 | Asus B550-I Strix | 2 x 16GB 3600@CL16 | 2 x WD SN750 | NZXT H1 V2 w/ Fan Mods

[Gungnir]  LG Gram 17 | i7-8565U | 16GB | 2 x 512GB SSD

[Munnin]  MacBook Pro 16 | i9-9980HK | Radeon Pro 5300M | 64GB RAM | 1TB


15 hours ago, ryan said:

@hfm I thought you were the guy with the Gram and eGPU? I have the 3060 and was thinking eGPU with a 4090 via TB4 next year. Is it worth it, or do you lose too much performance?

 

I sold the eGPU and built an SFF desktop. Still love the Gram though!

 

TB4 isn't very eGPU friendly; eGPUs will all still run over TB3 for the foreseeable future, until some better standard comes along.

 

 


A contractor I was working with until recently (he quit) told me he spent like 5 grand on his desktop, and he was visibly angry that you don't need 1000W to run a 3090, or 128GB of RAM to play Apex Legends. I mean, if you have intentions to use that power, then more power to ya, but it just seems more wasteful than the typical ePeen allowance.

 

I'm not sure where people get the crazy high numbers for their power requirements, though. Seems an oddity, but maybe that's due to me building ITX or using laptops. It's actually been a looooooong time since I used a full tower.

Telegram / TS3 / Twitter

 

 


Yeah, I have to agree; I don't see 1500W being a requirement, unless they are like 800W GPUs, which would be a pretty big jump from the 350-400W range.

 

Also @hfm, isn't Thunderbolt 4 more efficient than 3? I heard you get better performance, per Notebookcheck's review. Nothing like a Graphics Amplifier, but decent. I hope I'm not stuck with a 3060 for 5 years; I don't mind 2 years, but 5? I just can't afford a new laptop every year. The reason I've gone through so many is just selling and upgrading... lost a lot of money doing so.


16 hours ago, ryan said:

Yeah, I have to agree; I don't see 1500W being a requirement, unless they are like 800W GPUs, which would be a pretty big jump from the 350-400W range.

 

Also @hfm, isn't Thunderbolt 4 more efficient than 3? I heard you get better performance, per Notebookcheck's review. Nothing like a Graphics Amplifier, but decent. I hope I'm not stuck with a 3060 for 5 years; I don't mind 2 years, but 5? I just can't afford a new laptop every year. The reason I've gone through so many is just selling and upgrading... lost a lot of money doing so.

 

For eGPU, the most important part is that the controller is on-die and has direct access to the CPU, which they all have since, I think, 10th Gen Intel mobile CPUs. As indicated by Sonnet there, eGPUs will continue to use TB3.

 

As far as PSUs go, unless you bought 80 Plus Platinum or Titanium, most of them are at their best efficiency around 50-60% load, so it's probably not a bad thing to overdo it a little on the PSU. As well, modern CPUs can consume gobs of wattage while boosting, if you can keep them cool and deliver the current.
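To illustrate the sweet-spot point: wall draw is the DC load divided by efficiency, so the same components cost less power on a PSU running at mid-range utilization. The efficiency numbers below are typical 80 Plus Gold ballpark values, not from any specific unit's datasheet:

```python
# Wall draw = DC load / efficiency. A bigger PSU running the same load
# at mid-range utilization wastes less power as heat.
# Efficiencies are typical 80 Plus Gold ballpark values (assumed).

def wall_draw(dc_load_w: float, efficiency: float) -> float:
    return dc_load_w / efficiency

dc_load = 400  # watts the components actually pull

near_full = wall_draw(dc_load, 0.87)  # 450 W unit at ~89% load
mid_load  = wall_draw(dc_load, 0.90)  # 850 W unit at ~47% load
print(f"Near-full-load PSU: {near_full:.0f} W from the wall")
print(f"Mid-load PSU:       {mid_load:.0f} W from the wall")
```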


  • 3 weeks later...
On 3/31/2022 at 5:30 PM, ryan said:

if it's using only 1 lane I'd imagine it would be pretty slow, but after seeing TB4 vs TB3 it looks like TB4 is slightly faster... that video can't be right

 

Define slightly faster? For eGPU usage specifically, there are two things that are crucial:

  1. The Thunderbolt controller should have direct CPU access to limit latency. Pretty much all of them do for Intel 10th Gen mobile and forward; for instance, my 8th Gen Gram 17 has a discrete TB3 controller (Alpine Ridge) rather than an embedded one, which robs a little performance.
  2. It should use 4 lanes of PCIe to maximize bandwidth to the GPU. As stated in that Sonnet video, TB4 specifically has only 1 lane available, so it reverts back to TB3 to support these devices, which is fine. Unfortunately, this is how Intel designed the spec.

Perhaps in the future we'll get a new standard with 8 lanes of PCIe (TB5 or USB5...?), something like the solution Asus is using for their XG Mobile interface (PCIe 3.0 x8).
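Nominal PCIe data rates make the gap concrete. The lane counts below follow the claims in this thread (the x1 TB4 figure is from the Sonnet video), and these are raw link rates before protocol overhead, so real throughput is lower:

```python
# PCIe 3.0 runs at 8 Gb/s per lane (nominal, before encoding overhead).
PCIE3_LANE_GBPS = 8

links = {
    "TB3 eGPU (PCIe 3.0 x4)": 4,
    "TB4 device link (x1, per the Sonnet video)": 1,
    "Asus XG Mobile (PCIe 3.0 x8)": 8,
}

for name, lanes in links.items():
    print(f"{name}: {lanes * PCIE3_LANE_GBPS} Gb/s")
```

The x4 row matches the 32Gb/s TB3 figure mentioned later in the thread.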


It's just from old memory; the Thunderbolt 4 comparison I saw showed 4 ahead of 3... I don't remember the article, but that's what I remember... I'm obviously wrong if you cling to this... I'll do some YouTube searching and report back.

 

 

 

 

 

Sorry @hfm, you're usually right, but it's looking like TB4 is 25 to 50 percent faster... the video you linked is garbage.

 

 

These reviewers straight up tested gaming performance and skipped the mumbo-jumbo tech talk... yeah, Thunderbolt uses 1 lane, blah blah, but ironically it's faster in games, and at the end of the day that's all that matters.


1 hour ago, ryan said:

It's just from old memory; the Thunderbolt 4 comparison I saw showed 4 ahead of 3... I don't remember the article, but that's what I remember... I'm obviously wrong if you cling to this... I'll do some YouTube searching and report back.

 

 

 

 

 

Sorry @hfm, you're usually right, but it's looking like TB4 is 25 to 50 percent faster... the video you linked is garbage.

 

 

These reviewers straight up tested gaming performance and skipped the mumbo-jumbo tech talk... yeah, Thunderbolt uses 1 lane, blah blah, but ironically it's faster in games, and at the end of the day that's all that matters.

 

I believe these 10th gen parts probably still have external TB controllers, not embedded. Perhaps it was 11th Gen that had those and I'm a generation off. This could also just be down to 11th gen parts being faster in general.

EDIT: 10th gen is where the embedded on-die TB controllers were introduced, specifically in the 4-core ultrabook line (the H series didn't get it until 11th gen).

It would take digging into the laptops used and how the architecture of the chip has changed.

 

One, that H CPU in the Razer in the 2nd video has an off-die controller. An on-die 4-core will beat a 10th gen H series EVERY DAY over TB, thanks to reduced latency. Again, this has NOTHING to do with TB4; it's still using 32Gb/s 4-lane TB3, the same as any eGPU on TB that supports 4 lanes. It comes down to any efficiency gained in the TB controller implementation on the CPU die, and then, given those are equal, any CPU speed improvements generation to generation. As for that Razer comparison in Jarrod's video, I wouldn't be surprised to find out that Razer's 10th gen implementation used an off-die controller.

 

This has NOTHING to do with TB4 vs TB3; all eGPUs are THUNDERBOLT 3. There is NO SUCH THING as a Thunderbolt 4 eGPU; they all use the TB3 protocol.

 


oic @hfm, I see your point!

 

But with TB4 they are seeing a big jump in performance, and if it's the same, you wouldn't see such a big jump.

Actually I'm confused... so you're saying my 11800H and TB4 will be slower? Faster CPU and equal TB speed = less performance because... of what?


13 minutes ago, ryan said:

oic @hfm, I see your point!

 

But with TB4 they are seeing a big jump in performance, and if it's the same, you wouldn't see such a big jump.

Actually I'm confused... so you're saying my 11800H and TB4 will be slower? Faster CPU and equal TB speed = less performance because... of what?

It has very little to do with individual component performance and everything to do with efficiency between components.

 

This is my current uninformed understanding of course.


Yeah, it doesn't make a lot of sense... a slower CPU getting better performance due to the wiring...

 

 

Kinda confused... so all eGPUs are TB3, but the TB4 connection itself is faster?

 

Misleading YouTube videos.


3 hours ago, ryan said:

oic @hfm, I see your point!

 

But with TB4 they are seeing a big jump in performance, and if it's the same, you wouldn't see such a big jump.

Actually I'm confused... so you're saying my 11800H and TB4 will be slower? Faster CPU and equal TB speed = less performance because... of what?

 

The 11800H has an on-die Thunderbolt controller IIRC, so it shouldn't suffer from interconnect latency, as the TB controller has direct access to the CPU and doesn't need to travel through the north bridge. It will still be limited to Thunderbolt 3 bandwidth. It DOES NOT MATTER that it's a TB4 controller; it's going to use TB3 mode for an eGPU. What Sonnet says is true: TB4 is not built to handle the eGPU scenario. I would think Sonnet knows what they are talking about; they are one of the premier manufacturers of eGPU solutions. If building a TB4 client eGPU solution were going to be beneficial, they would definitely have done it already. I don't know why you don't believe their video explanation of why they are not.

 

Another very popular way of handling this is using NVMe-to-PCIe eGPU solutions, as the NVMe interface doesn't have the same bandwidth restriction as TB; you get the full 4 lanes of bandwidth. It's a little unwieldy, as you sometimes need to do a little modding to make it look seamless, but lots of people have done it to get better eGPU performance. It's still going to pale in comparison to a good dGPU in a laptop, no matter what you do.

 

EDIT: Especially if that dGPU can use a MUX switch to avoid the Optimus penalty. I still think, for instance, a 3080 eGPU is going to beat a lower-tier dGPU, especially at 4K where things are mostly GPU limited. Some games are more taxing on PCIe bandwidth than others; it varies game to game. That's where you see some of the benefits of on-die TB controllers, when that latency really matters; how much it matters is often use-case or game dependent.


4 hours ago, Reciever said:

@hfm iirc on NBR you had been using an eGPU as your primary means to game for a meaningful stretch of time, right?

Yeah, I had my Gram 17 (which I still use at least once a week) + a Sonnet 550 with a 2070 in it for a while. It worked pretty well, even with the 15W 8th Gen CPU. But now that I'm working from home every day, I decided to build an SFF desktop instead.


1 hour ago, ryan said:

Yeah, it doesn't make a lot of sense... a slower CPU getting better performance due to the wiring...

 

 

Kinda confused... so all eGPUs are TB3, but the TB4 connection itself is faster?

 

Misleading YouTube videos.

It has to do with whether or not the TB signal has to go through any intermediaries before communicating with the CPU, and vice-versa. Those small interruptions add up. From what I have read before (though I'm unsure of its accuracy), TB4 is merely what TB3 should've been, as the speeds did not increase generation over generation.


oic geeesh..

 

I was under the impression that newer is better, lol? (sarcasm)

 

Yeah, so I'm stuck with TB4. Does this mean I'm only getting 1 lane and an eGPU will be slow with the 11800H? Because I was really expecting to use an eGPU in 3 years' time... and thanks for clearing up the confusing mess @Reciever... you're a lot smarter than me, but don't let it get to you 😛 My dad's really smart like you guys, and a lot of the time I have to ask him questions about computers/science... so much false information out there.


I wouldn't say that I am smart; it's more that I always hedge my statements, because I don't know everything.

 

Some context may be lost, but I did recall @hfm using an eGPU setup, and based on his posts I never really took him for someone that settles for anything less than desired. I myself have been curious about eGPUs for some time as well, although I haven't pursued one personally.


10 hours ago, ryan said:

oic geeesh..

 

I was under the impression that newer is better, lol? (sarcasm)

 

Yeah, so I'm stuck with TB4. Does this mean I'm only getting 1 lane and an eGPU will be slow with the 11800H? Because I was really expecting to use an eGPU in 3 years' time... and thanks for clearing up the confusing mess @Reciever... you're a lot smarter than me, but don't let it get to you 😛 My dad's really smart like you guys, and a lot of the time I have to ask him questions about computers/science... so much false information out there.

 

No, it will fall back to the TB3 protocol for eGPU; I said that a couple of times already. Though I shouldn't say "fall back": it will use TB3 mode.


oic ok... but I guess this has veered off topic. Do you think it's worth it? Like, when the 5000 series comes out, will I see a bump in gaming at 4K, or am I better off just building a desktop? Kinda like what you did.

