MyPC8MyBrain
-
This is not an analog grounding case; "analog isolation" in this context refers to the circuit design, with no relation to "ground." Dell relies heavily on Intel Dynamic Platform & Thermal Framework (DPTF). Power mode changes trigger firmware-level policy changes, not just Windows settings, and these transitions are abrupt by design to meet thermal and battery targets. This is more common on Dell than on other brands due to tighter power envelopes, less analog isolation on modern, compact boards, and an emphasis on efficiency over electrical quietness. Older systems masked this better with larger inductors, heavier grounding, and more conservative power transitions.
-
I agree! My point wasn't to challenge the phenomenon, just to explain its root cause so you don't end up tilting at windmills. Analog isolation means physically separating two circuits so they don't share current or ground. The Dell redesign cutbacks and consolidation we've seen the past few years are losing modularity, not adding complexity, so the analog circuit designs are getting chopped and consolidated too, all in the name of selling more laptops faster. Soon we'll pick up laptops like a hot dog from a 7-Eleven, with the same emotions and customization reserved for socks.
-
This is not a defect, not a failing speaker, and not Windows "doing something wrong." It's a byproduct of modern mobile power design, aggressive firmware control, and minimal analog isolation. Dell's recent designs prioritize efficiency and thinness over the kind of electrical overengineering older workstations had. If you want it gone: don't change power modes during playback, lock the system to a single power mode, or use external audio (a USB DAC or Bluetooth), which bypasses the internal analog path entirely.
-
Dell used to be a systems company. It built machines around customer needs, real workloads, and long-term relationships. You could spec a system, argue about it, push back on pricing, and eventually land on something that made sense for both sides. It was not perfect, but it was rational. That Dell no longer exists.

What replaced it is a pricing engine that reacts to market hysteria rather than fundamentals. Overnight RAM hikes. Claims of "losing money" while sitting on years of inventory. Pressure tactics that look less like negotiation and more like ransom notes.

That same mentality now shows up in the hardware itself. Recent Dell platforms, including flagship models, have steadily lost modularity. Fewer replaceable components. More proprietary layouts. Tighter coupling between parts that were once serviceable and upgradeable. None of this improves performance. It improves control.

For decades, Dell's strength at the edge was adaptability. Systems could evolve with workloads. Their useful life could be extended. That is how trust was built. Now that trust is being monetized. Momentum earned over decades is being exploited as the rug is slowly pulled out from under customers who are not at the very top of the enterprise stack. Buy what is offered, at the price dictated, and replace it sooner. That is the new model.

This is not leadership in edge technology. It is brand liquidation. When modularity disappears, long-term value disappears with it. Hardware stops being an investment and becomes a disposable purchase. At that point, Dell is no longer chosen for engineering or reliability, but for convenience. That is how a systems vendor turns into shelf hardware. Pick it up if it is there. Discard it when it is not.

So this is not anger. It is an obituary. Dell had decades of goodwill, engineering credibility, and customer trust. It traded that in for short-term extraction during a bubble. It was good while it lasted. But the Dell that built systems is gone.
-
The AI boom is more overhyped than the 1990s dot-com bubble...
MyPC8MyBrain replied to Papusan's topic in Tech News
-
This year’s balance sheet and sales projections coming out of Dell have to be some of the worst I’ve seen in decades. I honestly don’t know who over there drank the Kool-Aid, but the disconnect from reality is getting absurd.

Dell pushed another round of price increases at the start of this week. This time it wasn’t just servers. It hit the consumer side as well. Because our order includes consumer-grade components, we got nailed with roughly a 30% jump on desktops and laptops, right after they already hiked server pricing a few weeks ago. To put numbers on it: a very basic configuration, i9-285, 16GB DDR5, 512GB NVMe, which was already overpriced at $1218, is now sitting around $1500. And they’re acting like this is some kind of favor due to “market conditions.”

Meanwhile, HP is selling the same class machine on their public website, no bulk discounts, no backroom pricing, no end-of-year promos, with higher specs, i9-285, 32GB DDR5, 512GB NVMe, for $1299. That’s what any random person sees when they go online and compare systems. So Dell’s “heavily discounted” bulk pricing is now more expensive than HP’s public retail pricing with better hardware. Let that sink in.

This isn’t about RAM shortages anymore. This is pricing discipline gone off the rails, and it’s happening at the worst possible time. If this is what Dell thinks the market will tolerate going into next year, they’re in for a rude awakening.
-
One thing that is not being talked about enough is how much actual technology progress may end up stalled because of this. The AI boom is locking in massive, multi-year capital commitments based on today’s hardware and today’s software assumptions. Billions are being committed now, but supply takes time. By the time much of this equipment is delivered, it will already represent a frozen generation.

That creates an incentive problem. If NVIDIA advances the architecture too aggressively while these contracts are still being fulfilled, it risks legal exposure from customers who just spent enormous money on hardware that was positioned as long-term viable. If it does not advance fast enough, it risks falling behind competitors and alternative architectures. The safest path, from a legal and financial standpoint, is to slow real architectural change while extracting as much value as possible from the current generation.

That is not how technology normally progresses. Historically, hardware moves forward because the next thing makes the previous one clearly obsolete. In this cycle, progress is constrained by the need to protect sunk costs and contractual commitments tied to an artificial scarcity model.

What makes this worse is that the hardware being purchased now is tightly coupled to the current non-coherent memory and CUDA-centric software stack. If a materially better memory model arrives in the next few years (like CXL), large portions of today’s AI infrastructure could become inefficient overnight. That puts vendors in a bind. Advance too fast and you anger your biggest customers. Advance too slowly and you turn the boom into a dead end.

Either way, there is a real risk that we are not just inflating prices, but also delaying the next meaningful architectural step, because the money is already committed to preserving the current one. That may end up being the most expensive part of this cycle.
-
The first AI bubble domino to topple looks imminent -> ORACLE
-
Most of the hype here isn’t driven by fundamentals. It’s emotional reactions to buzzwords and shallow usage. People assume that because they can open ChatGPT on their phone, they somehow understand AI. That’s the same herd-mentality behavior that helped inflate the dot-com bubble.

The reactions in this thread didn’t form on their own. They’re a direct result of social media amplification and the fact that anyone can now gain visibility and perceived authority almost instantly. Back in the dot-com era, hype was pushed by analysts, executives, and traditional media. Today it’s pushed by algorithms that reward confidence, oversimplification, and spectacle. A viral LinkedIn post or YouTube short claiming “AI will replace everything” carries more influence than balance sheets, margins, or real deployment costs.

That’s why surface-level use gets mistaken for expertise. Access is confused with understanding. Using a smartphone, installing apps, or typing prompts doesn’t mean understanding model limits, infrastructure costs, power requirements, or long-term ROI. Social platforms collapse all of that nuance.

This is classic bubble behavior:
- success stories spread instantly
- skepticism gets labeled as anti-progress
- valuations drift away from fundamentals as perception outruns reality

AI clearly has real and useful applications. But the widespread certainty that it will effortlessly do everything while printing money forever is being reinforced at scale. That’s why this feels worse than the dot-com bubble. The hype engine is faster, louder, and global, while economic gravity hasn’t changed.
-
For additional context, this quote did not come out of a single email or a casual exchange. This was after two full days of back and forth where I refused to accept a 100 percent overnight RAM increase on an active enterprise order. I escalated the issue beyond account management to regional and global executives and made it clear I was not going to pay what amounted to ransomware pricing. After that escalation, I pulled the entire invoice off the table. Several hundred thousand dollars in equipment was withdrawn from the deal. That’s when I received the email response I am quoting below from Dell’s regional sales manager.

At that point the position was clear. This was not a negotiation and not a supply issue. It was a decision. Dell chose to hold pricing rather than keep the business, and as a result they lost the entire invoice to a competing vendor.

That is why the explanation does not hold up. Dell did not suddenly incur double costs overnight. There was no emergency restock at inflated pricing. This was about margin protection and inventory allocation, not cost recovery. When vendors are willing to walk away from large enterprise orders rather than adjust pricing, it tells you where the incentives really are. Traditional customers are no longer the priority when the same hardware can be redirected into higher-margin channels tied to AI demand. That is not a RAM shortage. That is price steering.

And this is exactly why buyers need to push back. Once customers accept this as normal, it stops being a market distortion and becomes policy.
-
The AI hardware market isn’t about raw compute anymore. What’s really happening is that it’s being exploited because of a memory problem that’s been baked into x86 PCs and servers for decades and never fixed.

Here’s the core of it. CPU and GPU memory don’t talk to each other properly. They sit in completely separate pools, and any time the GPU needs data from system RAM, it has to copy it over the PCIe bus, sync it, process it, then copy it back. This isn’t really a bandwidth problem. It’s latency, coordination, and a mess of software complexity, and it gets worse as AI models get bigger. That’s why VRAM has become a hard gate for anyone trying to run AI locally. If your model doesn’t fit inside the GPU’s memory, you are basically dead in the water, even while massive chunks of system RAM sit idle.

Apple Silicon shows this isn’t a law of physics. With unified, coherent memory, CPU and GPU work on the same data without copies. Memory isn’t owned by one device. It is shared. Software gets simpler, latency drops, and efficiency jumps. The only limit is how much memory the product ships with, not the architecture itself.

This is exactly where the industry should be heading, and where CXL comes in. CXL isn’t a faster PCIe bus. It is a coherency protocol that lets CPUs, GPUs, and memory expanders share the same memory pool. Instead of forcing GPUs to hoard memory locally, you can treat system RAM as a shared resource. Models don’t need to be duplicated per GPU anymore, and scaling becomes a matter of adding compute, not copying memory. It doesn’t magically make DDR as fast as HBM, and latency doesn’t disappear. But it removes the need to constantly move data around just to make accelerators work at all.

This is also why CUDA exists. CUDA thrives because it gives developers control over isolated memory domains, which is exactly what current hardware forces you to do. CUDA didn’t create the problem. It just optimized around it. But the moment memory becomes coherent and shared, a lot of CUDA’s “must-have” control starts to matter less. You move from orchestrating memory to scheduling compute, and suddenly the advantage of isolated memory shrinks.

NVIDIA knows this, and their strategy locks the status quo in place. They cannot offer unified system memory on x86 today, so instead they pack GPUs with ever-larger VRAM. That is not luxury. It is a workaround. NVLink exists for multi-GPU coherence, but it is mostly limited to servers. High-end workstations ship massive VRAM but no real way to pool it coherently. You are forced to manage memory manually or pay an enormous premium for server-grade gear. It is not a mistake. It is a product boundary.

The result is clear. The AI boom is built on duplicated memory, forced copies, and inflated costs. NVIDIA isn’t just winning because of fast GPUs or CUDA. They have built an ecosystem around an architectural flaw. CXL is the first step to fixing it. It will not flip the market overnight, but coherent memory as a first-class system resource will eventually shift the balance. Control over capacity matters less, efficiency of compute matters more. Right now, the industry is paying a premium for an architectural flaw that NVIDIA has learned to monetize with surgical precision. That is the real NVIDIA tax.
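The staged copy-in/copy-out pattern described above can be sketched in a few lines. This is a toy Python model, not real GPU code: the class names and the doubling "kernel" are invented for illustration, and the only point is the difference in data movement between split memory pools and a coherent shared pool.

```python
class DiscreteGPU:
    """Split pools: every kernel launch stages data across the 'bus'."""
    def __init__(self):
        self.bytes_copied = 0

    def run_kernel(self, host_data):
        vram = list(host_data)            # host -> device staging copy
        self.bytes_copied += len(vram)
        result = [x * 2 for x in vram]    # compute on the device-side copy
        out = list(result)                # device -> host copy of the result
        self.bytes_copied += len(out)
        return out

class CoherentGPU:
    """Unified pool: CPU and GPU operate on the same buffer in place."""
    def __init__(self):
        self.bytes_copied = 0

    def run_kernel(self, shared_data):
        for i, x in enumerate(shared_data):  # no staging copies at all
            shared_data[i] = x * 2
        return shared_data

data = [1, 2, 3, 4]
d = DiscreteGPU()
assert d.run_kernel(data) == [2, 4, 6, 8]
print("discrete model moved", d.bytes_copied, "elements")   # 4 in + 4 out

c = CoherentGPU()
c.run_kernel(data)
print("coherent model moved", c.bytes_copied, "elements")   # zero
```

The discrete model moves every element twice per launch; the coherent model moves nothing, which is the whole argument for unified memory in miniature.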
-
The mess we’re seeing in RAM pricing didn’t come out of nowhere. It’s the end result of a chain reaction that started with two companies most people have never heard of.

The rabbit hole begins in Spruce Pine, North Carolina, where Sibelco and The Quartz Corp. run the mines that produce almost all of the ultra-pure quartz used to make the crucibles for growing silicon ingots. This material is so clean and rare that the entire semiconductor supply chain depends on it. No quartz means no crucibles. No crucibles means no wafers. No wafers means no chips. It is one of the most fragile choke points in modern technology, and nobody paid attention to it until Hurricane Helene hit in September 2024. Power went out, roads were flooded, and both mines were shut down for weeks. That alone was enough to push every major wafer supplier into allocation almost immediately. Shin-Etsu, SUMCO, GlobalWafers, Siltronic: all of them tightened supply heading into 2025 because their raw material pipeline had stalled.

Once the upstream pressure hit the wafer producers, the downstream consequences landed in the laps of Samsung, SK Hynix, and Micron. These three control roughly ninety-five percent of the world’s DRAM output and effectively the entire supply of HBM. Their order books were already stressed, and the gap between normal PC demand and AI demand turned into a canyon. Consumer DDR5 grows at a predictable pace. AI customers are throwing money at HBM3 and HBM4 and are willing to pay five to ten times the margin of desktop memory. Faced with that imbalance, the big three did what they always do when margins are skewed. They shifted most of their advanced DRAM capacity toward HBM and server-grade DDR5. The consumer market was left with scraps.

Prices didn’t rise by accident; the shortage was engineered by capacity decisions. December contract pricing jumped nearly 80 to 100 percent in a single month. Retail followed just as fast. Kits that cost around one hundred dollars in midsummer now sit closer to two hundred fifty and climbing. Even DDR4 is riding the same wave as older lines get repurposed or shut down.

The ripple effect is already hitting system builders. Prebuilt vendors have announced fifteen to twenty-five percent price increases and directly named memory costs as the reason. And this isn’t the ceiling. Current projections show constrained supply stretching well into late 2027. New fabs take years to build and qualify. Meanwhile AI demand refuses to slow.

Boiled down, the Spruce Pine shutdown was the trigger, but the runaway market we’re seeing now is the result of Samsung, SK Hynix, and Micron chasing AI margins and letting the consumer channel absorb the damage. It mirrors the seventies oil shock, except the “OPEC” here is three semiconductor giants who don’t need to hide their strategy. RAM has become digital oil, and the price at the pump just doubled. If you plan to upgrade, do it now, because this trend isn’t turning around anytime soon.
-
That’s exactly the direction this is heading, and it’s not even incompetence; it’s intentional. Everyone thinks “8GB is too little,” but for the next wave of OS designs, 8GB is plenty if the UI is nothing more than a thin shell that boots straight into your online account. Local compute becomes irrelevant when everything you do gets pushed through a remote service.

First it was games. Then it was telemetry. Then “cloud integration.” Now the hardware footprint itself is being shrunk to force people deeper into online ecosystems where every click, scroll, and purchase is monetized. You’re not buying a machine anymore; you’re buying an access terminal that feeds them data.

And that’s the endgame: reduce BOM costs, cut local capability, and make sure every user, especially the non-technical ones, has no choice but to live inside a walled garden. It’s the most reliable revenue model they’ve ever had, and they’re going to push it hard. The cheap laptops won’t just be weak. They’ll be weak on purpose.

It’s insane when you step back and look at it. We spent decades moving forward, empowering people with real machines, real autonomy, real capability, and now the entire industry is dragging us backwards so the richest companies on earth can squeeze profit out of every possible angle.

People forgot what “PC” even stands for. It was Personal Computing. Local. Independent. Yours. What they’re pushing now is basically Public Computing: thin clients dressed up as laptops, everything routed through someone else’s servers, someone else’s rules, someone else’s monetization engine. It’s the opposite of progress. It’s not innovation; it’s consolidation disguised as convenience. And the saddest part? Most people won’t realize what they lost until it’s completely gone and irreversible.
-
Quick update on where things stand, and honestly, the market’s taken another turn for the worse in just the past few days. It’s not just RAM anymore. Everything upstream is getting hammered. Even copper futures spiked hard, which tells you all you need to know about where the manufacturing and logistics chain is heading. When raw materials start jumping like that, everything that depends on them follows: PCBs, power delivery, cabling, server chassis, networking hardware, all of it.

The pricing pressure we’re seeing from Dell isn’t isolated. It’s a symptom of a bigger storm that’s been building quietly for months. AI build-outs have vacuumed up supply, fabs are oversubscribed, and now even the industrial commodities behind the hardware are taking hits. That’s a perfect recipe for a market where consumers, prosumers, small businesses, labs, and repair shops all get squeezed hard.

And the worst part is the feedback loop: higher costs mean fewer purchases, fewer purchases mean lower volume, lower volume means even higher per-unit costs. This snowball is rolling downhill fast, and unless something breaks the cycle, we’re all about to pay “entry fees” for basic compute that would’ve sounded insane five years ago. The trend is obvious: we’re heading into a hardware market where everything costs more, performs the same, and arrives slower. The timing couldn’t be worse.
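The feedback loop in that last paragraph can be made concrete with a toy model. Every number here is invented purely for illustration (the cost split, the demand curve, the elasticity): unit cost carries an amortized fixed component, so a drop in volume raises unit cost, and price-sensitive demand then cuts volume again.

```python
def unit_cost(volume, variable=400.0, fixed=2_000_000.0):
    # Per-unit cost: variable cost plus fixed costs amortized over volume,
    # so lower volume means a higher price per unit.
    return variable + fixed / volume

def demand(price, base_volume=10_000, reference_price=600.0, elasticity=4):
    # Crude elastic demand: volume falls steeply as price rises above the
    # reference price. Parameters are hypothetical.
    return base_volume * (reference_price / price) ** elasticity

volume = 9_000.0            # start from a demand shock below equilibrium
history = []
for _ in range(5):
    price = unit_cost(volume)
    volume = demand(price)
    history.append((round(price), round(volume)))

# Each turn of the loop: higher price, lower volume, higher price again.
for (p0, v0), (p1, v1) in zip(history, history[1:]):
    assert p1 > p0 and v1 < v0

print(history)
```

Whether the spiral actually runs away or settles depends entirely on how elastic demand is relative to the fixed-cost burden; with these made-up parameters it keeps tightening, which is the "snowball" the post describes.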