NotebookTalk

Everything posted by MyPC8MyBrain

  1. The erosion of our brains started a long time ago, when PDAs and smartphones first arrived on the scene over two decades ago.
  2. As they should be. Substrate Lithography, a next-generation semiconductor foundry - https://substrate.com/
  3. It's much simpler to answer this question from the other direction: when was the last time Microsoft did something right? The answer to that is simple: XP. Everything since has been one lame attempt after another at converting PCs, as in personal computers, into public money-grabbing kiosks.
  4. No, Jensen, AI is not what is causing the damage. What is causing the damage is deliberate market manipulation that was planned years in advance. Decisions like removing NVLink from workstation-class hardware were not technical necessities. They were strategic moves designed to ensure the hardware could only be used properly in the specific ways that extract the most value from customers. That is not AI hurting society. That is vendors engineering artificial constraints to maximize rent extraction. Blaming AI is a convenient deflection. The damage comes from choices made by people who were trusted with the keys to the ecosystem. Over two decades, that trust was built carefully. The moment leverage became absolute, it was abused. This is no longer about innovation. It is about control, segmentation, and rent-seeking behavior disguised as progress. History is unforgiving to this pattern. Companies at the peak of perceived indispensability convince themselves the fall cannot happen. It always does. The bigger the pedestal, the harder the landing. Today's industry heroes (NVIDIA) have a habit of becoming tomorrow's cautionary tales. AI is just the excuse. The real problem is greed dressed up as inevitability, Mr. Huang.
  5. CES is over, and look at what actually showed up. One low-tier new Intel CPU. One mildly boosted AMD SKU. No new GPUs. No real platform shifts. No meaningful architectural progress. For an industry supposedly in the middle of the biggest computing revolution in decades, this was a staggeringly thin showing. If this were a classroom, you would not call it cutting edge. You would call it remedial at best. Lots of repetition, very little advancement, and an uncomfortable sense that everyone is stalling for time. That is not because innovation suddenly got harder. It is because the market is locked into monetizing what already exists. When margins come from scarcity, price control, and segmentation, there is no urgency to move fast. So instead of bold platforms, we get SKU shuffling. Instead of new architectures, we get minor frequency bumps. Instead of GPUs, we get silence. This is what an industry looks like when it is more focused on managing demand than pushing capability. What makes this year stand out from previous ones is not that the products were underwhelming. It is that there was no clear sense of direction. Even going back to the 1980s and early 1990s, CES consistently showed forward motion. The hardware was primitive, but the trajectory was visible. New buses, new form factors, new categories, and clear signals about where platforms were heading next. This year did not feel like that. It felt like a strategic pause: an industry more focused on managing inventory, pricing, and segmentation than on moving the platform forward. That is not how CES has historically behaved, even in slower decades. There may have been weaker products in the past, but there has rarely been a show with so little visible intent to move computing forward. And that ties directly back to everything above. When markets are optimized for monetization over progress, innovation slows. Not because it is impossible, but because standing still has become more profitable than moving ahead.
  6. I'm still skeptical. It remains to be seen how close they'll actually get to "back to their roots." So far, the messaging only references XPS and, implicitly, Precision via the new naming convention. Both lines have earned multi-generation followings, but after the recent roller coaster, there's real ground to make up to regain trust. And that trust isn't tied to a name. It's tied to a philosophy, one they've been slowly eroding over the past few years.
  7. They could have just used Pro for the Latitude equivalent and left Precision as Precision for the higher tier, as it already was. So essentially the logical rename would be Latitude to Pro, if a name change was so necessary. What is the equivalent entry-point Inspiron line named now?
  8. Precision = Pro; calling it "Pro Precision" is redundant, especially since there's no non-Pro Precision model. Cup half full: it's better late than never. At least they came to their senses before they completely blew it.
  9. There is another consequence coming out of this landscape that few vendors seem to be accounting for. This pricing behavior fundamentally changes upgrade psychology. For decades, the PC and workstation market relied on a simple pattern. CPUs advanced, prices were tolerable, and users chased the next generation because the platform cost made sense. Memory was an afterthought, not a gating factor. That is no longer true. When customers are forced to buy systems loaded with 64GB, 96GB, or 128GB of memory just to remain functional, that memory stops being expendable. It becomes the anchor. The sunk cost people protect. The result is predictable. Instead of chasing a new CPU with marginal gains, users will sit on older systems that are already fully populated with expensive memory. Upgrading to a new platform now means rebuying RAM at inflated prices for single-digit performance improvements. That math does not work. So unit sales slow. Refresh cycles stretch. The traditional excitement around new CPUs fades. The market shifts from progression to entrenchment. Ironically, the attempt to extract more value per system accelerates the opposite outcome. Fewer systems sold. Longer lifespans. Less incentive to move forward. In practical terms, memory pricing has become a brake on innovation. People will not discard fully loaded, capable machines just to step into a new socket with unaffordable RAM and negligible gains. They will optimize around what they already own. That is how a market stops moving forward: not because technology stalled, but because economics made progress irrational. There is a reason this kind of behavior was pushed out of civilized markets in the first place. Pillage-and-plunder economics do not create growth. They extract value until the system stops moving. That model was understood, corrected, and regulated out of functional societies because it destroys trust faster than it creates profit. What is happening now feels like its quiet return. Not through force, but through leverage. Not through scarcity, but through manufactured constraints. While attention was elsewhere, the same behavior re-emerged the moment the market gave cover. When customers are told that doubling prices on inventory already sitting in warehouses is "unavoidable," when modularity is removed to lock in replacement cycles, when architectural inefficiencies are preserved because they are profitable, that is not innovation. It is extraction. Markets tolerate this only briefly. Once buyers adjust behavior, upgrades slow, alternatives are explored, and loyalty evaporates. The damage does not show up in quarterly reports at first. It shows up later, when momentum is gone and trust cannot be rebuilt on demand. History is consistent on this point. Systems do not collapse when people complain. They collapse when people quietly stop participating. And that is where this trajectory leads if it continues unchecked.
  10. It is clear how much thought and high-level engineering was placed into the new platform; placing the dGPU right under the CPU is clutch.
  11. Traditional industries like tires, rubber, steel, power, and aviation learned their lessons the hard way through catastrophic failure. You cannot ship a "minimum viable tire," push a subscription brake pedal, or break interchange standards every 18 months and expect society to function. Physics, liability, and real-world consequences forced discipline. Standards became sacred because lives depended on them. The modern IT industry escaped that crucible. Software abstracts consequences until they're diffuse, delayed, or offloaded onto users. That created moral hazard. When failure doesn't immediately kill someone or bankrupt the manufacturer, executives optimize for lock-in, rent extraction, and quarterly optics instead of durability, interoperability, and stewardship. Subscription everything, forced obsolescence, cloud dependency, vendor captivity: none of this would survive five minutes in a regulated physical industry. Calling it "innovation" is a fig leaf. Much of it is controlled degradation: removing user agency, breaking working systems, centralizing control, and monetizing dependency. Historically, societies only tolerate this until the hidden costs surface: systemic outages, security collapses, economic drag, loss of skills, and institutional fragility. At that point, the response is not polite market correction; it's regulation, breakup, or replacement. They are not criminals in a legal sense yet. But they are reckless custodians of critical infrastructure, behaving as if software is a toy rather than the nervous system of modern civilization. History is unkind to people who confuse temporary leverage with permanence.
  12. This is not an analog grounding case; analog isolation in this context refers to the circuit design, with no relation to "ground" in this specific context. Dell relies heavily on Intel Dynamic Platform & Thermal Framework (DPTF). Power mode changes trigger firmware-level policy changes, not just Windows settings, and these transitions are abrupt by design to meet thermal and battery targets. This is common on Dell (not so much with other brands) due to tighter power envelopes, less analog isolation on modern, compact boards, and an emphasis on efficiency over electrical quietness. Older systems masked this better with larger inductors, heavier grounding, and more conservative power transitions.
  13. I agree! My point wasn't to challenge the phenomenon, just to explain the root cause of it so you don't end up chasing windmills. Analog isolation means physically separating two circuits so they don't share electricity or ground. The Dell redesign cutbacks and consolidation we have seen the past few years are losing modularity in design, not adding complexity, hence the circuit designs are also getting chopped and consolidated, all in the name of selling more laptops faster. Soon we will pick up laptops like a hotdog from a 7-Eleven, with the same emotions and customization reserved for socks.
  14. This is not a defect, not a failing speaker, and not Windows "doing something wrong." It's a byproduct of modern mobile power design, aggressive firmware control, and minimal analog isolation. Dell's recent designs prioritize efficiency and thinness over the kind of electrical overengineering older workstations had. If you want it gone: don't change power modes during playback, lock the system to a single power mode, or use external audio (a USB DAC or Bluetooth), which bypasses the internal analog path entirely.
  15. Dell used to be a systems company. It built machines around customer needs, real workloads, and long-term relationships. You could spec a system, argue about it, push back on pricing, and eventually land on something that made sense for both sides. It was not perfect, but it was rational. That Dell no longer exists. What replaced it is a pricing engine that reacts to market hysteria rather than fundamentals. Overnight RAM hikes. Claims of “losing money” while sitting on years of inventory. Pressure tactics that look less like negotiation and more like ransom notes. That same mentality now shows up in the hardware itself. Recent Dell platforms, including flagship models, have steadily lost modularity. Fewer replaceable components. More proprietary layouts. Tighter coupling between parts that were once serviceable and upgradeable. None of this improves performance. It improves control. For decades, Dell’s strength at the edge was adaptability. Systems could evolve with workloads. Their useful life could be extended. That is how trust was built. Now that trust is being monetized. Momentum earned over decades is being exploited as the rug is slowly pulled out from under customers who are not at the very top of the enterprise stack. Buy what is offered, at the price dictated, and replace it sooner. That is the new model. This is not leadership in edge technology. It is brand liquidation. When modularity disappears, long-term value disappears with it. Hardware stops being an investment and becomes a disposable purchase. At that point, Dell is no longer chosen for engineering or reliability, but for convenience. That is how a systems vendor turns into shelf hardware. Pick it up if it is there. Discard it when it is not. So this is not anger. It is an obituary. Dell had decades of goodwill, engineering credibility, and customer trust. It traded that in for short-term extraction during a bubble. It was good while it lasted. But the Dell that built systems is gone.
  16. This year’s balance sheet and sales projections coming out of Dell have to be some of the worst I’ve seen in decades. I honestly don’t know who over there drank the Kool-Aid, but the disconnect from reality is getting absurd. Dell pushed another round of price increases at the start of this week. This time it wasn’t just servers. It hit the consumer side as well. Because our order includes consumer-grade components, we got nailed with roughly a 30% jump on desktops and laptops, right after they already hiked server pricing a few weeks ago. To put numbers on it: a very basic configuration, i9-285, 16GB DDR5, 512GB NVMe, which was already overpriced at $1218, is now sitting around $1500. And they’re acting like this is some kind of favor due to “market conditions.” Meanwhile, HP is selling the same class machine on their public website, no bulk discounts, no backroom pricing, no end-of-year promos, with higher specs, i9-285, 32GB DDR5, 512GB NVMe, for $1299. That’s what any random person sees when they go online and compare systems. So Dell’s “heavily discounted” bulk pricing is now more expensive than HP’s public retail pricing with better hardware. Let that sink in. This isn’t about RAM shortages anymore. This is pricing discipline gone off the rails, and it’s happening at the worst possible time. If this is what Dell thinks the market will tolerate going into next year, they’re in for a rude awakening.
  17. One thing that is not being talked about enough is how much actual technology progress may end up stalled because of this. The AI boom is locking in massive, multi-year capital commitments based on today's hardware and today's software assumptions. Billions are being committed now, but supply takes time. By the time much of this equipment is delivered, it will already represent a frozen generation. That creates an incentive problem. If NVIDIA advances the architecture too aggressively while these contracts are still being fulfilled, it risks legal exposure from customers who just spent enormous money on hardware that was positioned as long-term viable. If it does not advance fast enough, it risks falling behind competitors and alternative architectures. The safest path, from a legal and financial standpoint, is to slow real architectural change while extracting as much value as possible from the current generation. That is not how technology normally progresses. Historically, hardware moves forward because the next thing makes the previous one clearly obsolete. In this cycle, progress is constrained by the need to protect sunk costs and contractual commitments tied to an artificial scarcity model. What makes this worse is that the hardware being purchased now is tightly coupled to the current non-coherent memory and CUDA-centric software stack. If a materially better memory model arrives in the next few years (like CXL), large portions of today's AI infrastructure could become inefficient overnight. That puts vendors in a bind. Advance too fast and you anger your biggest customers. Advance too slowly and you turn the boom into a dead end. Either way, there is a real risk that we are not just inflating prices, but also delaying the next meaningful architectural step, because the money is already committed to preserving the current one. That may end up being the most expensive part of this cycle.
  18. The first AI bubble domino to topple is imminent -> ORACLE
  19. Most of the hype here isn't driven by fundamentals. It's emotional reactions to buzzwords and shallow usage. People assume that because they can open ChatGPT on their phone, they somehow understand AI. That's the same herd-mentality behavior that helped inflate the dot-com bubble. The reactions in this thread didn't form on their own. They're a direct result of social media amplification and the fact that anyone can now gain visibility and perceived authority almost instantly. Back in the dot-com era, hype was pushed by analysts, executives, and traditional media. Today it's pushed by algorithms that reward confidence, oversimplification, and spectacle. A viral LinkedIn post or YouTube short claiming "AI will replace everything" carries more influence than balance sheets, margins, or real deployment costs. That's why surface-level use gets mistaken for expertise. Access is confused with understanding. Using a smartphone, installing apps, or typing prompts doesn't mean understanding model limits, infrastructure costs, power requirements, or long-term ROI. Social platforms collapse all of that nuance. This is classic bubble behavior: success stories spread instantly, skepticism gets labeled as anti-progress, and valuations drift away from fundamentals as perception outruns reality. AI clearly has real and useful applications. But the widespread certainty that it will effortlessly do everything while printing money forever is being reinforced at scale. That's why this feels worse than the dot-com bubble. The hype engine is faster, louder, and global, while economic gravity hasn't changed.
  20. For additional context, this quote did not come out of a single email or a casual exchange. This was after two full days of back and forth where I refused to accept a 100 percent overnight RAM increase on an active enterprise order. I escalated the issue beyond account management to regional and global executives and made it clear I was not going to pay what amounted to ransomware pricing. After that escalation, I pulled the entire invoice off the table. Several hundred thousand dollars in equipment was withdrawn from the deal. That's when I received the email response I am quoting below from Dell's regional sales manager. At that point the position was clear. This was not a negotiation and not a supply issue. It was a decision. Dell chose to hold pricing rather than keep the business, and as a result they lost the entire invoice to a competing vendor. That is why the explanation does not hold up. Dell did not suddenly incur double costs overnight. There was no emergency restock at inflated pricing. This was about margin protection and inventory allocation, not cost recovery. When vendors are willing to walk away from large enterprise orders rather than adjust pricing, it tells you where the incentives really are. Traditional customers are no longer the priority when the same hardware can be redirected into higher-margin channels tied to AI demand. That is not a RAM shortage. That is price steering. And this is exactly why buyers need to push back. Once customers accept this as normal, it stops being a market distortion and becomes policy.
  21. The AI hardware market isn't about raw compute anymore. What's really happening is that it's being exploited because of a memory problem that's been baked into x86 PCs and servers for decades and never fixed. Here's the core of it. CPU and GPU memory don't talk to each other properly. They sit in completely separate pools, and any time the GPU needs data from system RAM, it has to copy it over the PCIe bus, sync it, process it, then copy it back. This isn't really a bandwidth problem. It's latency, coordination, and a mess of software complexity, and it gets worse as AI models get bigger. That's why VRAM has become a hard gate for anyone trying to run AI locally. If your model doesn't fit inside the GPU's memory, you are basically dead in the water, even while massive chunks of system RAM sit idle. Apple Silicon shows this isn't a law of physics. With unified, coherent memory, CPU and GPU work on the same data without copies. Memory isn't owned by one device. It is shared. Software gets simpler, latency drops, and efficiency jumps. The only limit is how much memory the product ships with, not the architecture itself. This is exactly where the industry should be heading and where CXL comes in. CXL isn't a faster PCIe bus. It is a coherency protocol that lets CPUs, GPUs, and memory expanders share the same memory pool. Instead of forcing GPUs to hoard memory locally, you can treat system RAM as a shared resource. Models don't need to be duplicated per GPU anymore, and scaling becomes a matter of adding compute, not copying memory. It doesn't magically make DDR as fast as HBM, and latency doesn't disappear. But it removes the need to constantly move data around just to make accelerators work at all. This is also why CUDA exists. CUDA thrives because it gives developers control over isolated memory domains, which is exactly what current hardware forces you to do. CUDA didn't create the problem. It just optimized around it. But the moment memory becomes coherent and shared, a lot of CUDA's "must-have" control starts to matter less. You move from orchestrating memory to scheduling compute, and suddenly the advantage of isolated memory shrinks. NVIDIA knows this, and their strategy locks the status quo in place. They cannot offer unified system memory on x86 today, so instead they pack GPUs with ever-larger VRAM. That is not luxury. It is a workaround. NVLink exists for multi-GPU coherence, but it is mostly limited to servers. High-end workstations ship massive VRAM but no real way to pool it coherently. You are forced to manage memory manually or pay an enormous premium for server-grade gear. It is not a mistake. It is a product boundary. The result is clear. The AI boom is built on duplicated memory, forced copies, and inflated costs. NVIDIA isn't just winning because of fast GPUs or CUDA. They have built an ecosystem around an architectural flaw. CXL is the first step to fixing it. It will not flip the market overnight, but coherent memory as a first-class system resource will eventually shift the balance. Control over capacity matters less, efficiency of compute matters more. Right now, the industry is paying a premium for an architectural flaw that NVIDIA has learned to monetize with surgical precision. That is the real NVIDIA tax. (A minimal code sketch of this copy overhead follows after the last post below.)
  22. The mess we’re seeing in RAM pricing didn’t come out of nowhere. It’s the end result of a chain reaction that started with two companies most people have never heard of. The Rabbit Hole begins in Spruce Pine, North Carolina, where Sibelco and The Quartz Corp. run the mines that produce almost all of the ultra-pure quartz used to make the crucibles for growing silicon ingots. This material is so clean and rare that the entire semiconductor supply chain depends on it. No quartz means no crucibles. No crucibles means no wafers. No wafers means no chips. It is one of the most fragile choke points in modern technology, and nobody paid attention to it until Hurricane Helene hit in September 2024. Power went out, roads were flooded, and both mines were shut down for weeks. That alone was enough to push every major wafer supplier into allocation almost immediately. Shin-Etsu, SUMCO, GlobalWafers, Siltronic; all of them tightened supply heading into 2025 because their raw material pipeline had stalled. Once the upstream pressure hit the wafer producers, the downstream consequences landed in the laps of Samsung, SK Hynix, and Micron. These three control roughly ninety-five percent of the world’s DRAM output and effectively the entire supply of HBM. Their order books were already stressed, and the gap between normal PC demand and AI demand turned into a canyon. Consumer DDR5 grows at a predictable pace. AI customers are throwing money at HBM3 and HBM4 and are willing to pay five to ten times the margin of desktop memory. Faced with that imbalance, the big three did what they always do when margins are skewed. They shifted most of their advanced DRAM capacity toward HBM and server grade DDR5. The consumer market was left with scraps. Prices didn’t rise by accident; the shortage was engineered by capacity decisions. December contract pricing jumped nearly 80 to 100 percent in a single month. Retail followed just as fast. Kits that cost around one hundred dollars in midsummer now sit closer to two hundred fifty and climbing. Even DDR4 is riding the same wave as older lines get repurposed or shut down. The ripple effect is already hitting system builders. Prebuilt vendors have announced fifteen to twenty-five percent price increases and directly named memory costs as the reason. And this isn’t the ceiling. Current projections show constrained supply stretching well into late 2027. New fabs take years to build and qualify. Meanwhile AI demand refuses to slow. Boiled down, the Spruce Pine shutdown was the trigger, but the runaway market we’re seeing now is the result of Samsung, SK Hynix, and Micron chasing AI margins and letting the consumer channel absorb the damage. It mirrors the seventies oil shock, except the “OPEC” here is three semiconductor giants who don’t need to hide their strategy. RAM has become digital oil, and the price at the pump just doubled. If you plan to upgrade, do it now, because this trend isn’t turning around anytime soon.
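A minimal sketch of the copy overhead described in post 21, assuming a machine with a discrete NVIDIA GPU and the CUDA toolkit installed; the kernel name, sizes, and values are arbitrary placeholders. The first half shows the split-pool model the post criticizes: separate host and device allocations, staged over PCIe with explicit cudaMemcpy calls. The second half uses cudaMallocManaged, a single allocation visible to both CPU and GPU, which only approximates the coherent shared-memory model the post attributes to Apple Silicon and CXL, since the runtime still migrates pages behind the scenes.

// copy_vs_shared.cu - illustration only, not a benchmark
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: multiply every element by a factor.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                      // ~1M floats (~4 MB)
    const size_t bytes = n * sizeof(float);

    // Split pools: host RAM and VRAM are separate, so data is staged over PCIe.
    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = nullptr;
    cudaMalloc((void **)&dev, bytes);                       // VRAM-only buffer
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // copy in
    scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // copy back out
    printf("explicit copies: %f\n", host[0]);
    cudaFree(dev);
    free(host);

    // Managed memory: one allocation both sides can touch; no hand-written
    // staging, though pages are still migrated by the driver under the hood.
    float *shared = nullptr;
    cudaMallocManaged((void **)&shared, bytes);
    for (int i = 0; i < n; ++i) shared[i] = 1.0f;
    scale<<<(n + 255) / 256, 256>>>(shared, n, 2.0f);
    cudaDeviceSynchronize();                    // wait before the CPU reads
    printf("managed memory:  %f\n", shared[0]);
    cudaFree(shared);
    return 0;
}

The point of the contrast is the programming model, not the numbers: once memory is presented as one pool, the staging code disappears, which is the shift post 21 argues CXL-style coherency would bring to x86 at the hardware level rather than through driver-managed page migration.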