NotebookTalk

Etern4l

Member
  • Posts: 1,876
  • Joined
  • Days Won: 13

Everything posted by Etern4l

  1. I am on the same page as OP (and many others). There are quite a few reasons to avoid and divest from Google. Those have been rehashed many times, and to some people the rationale will always remain abstract or questionable, but here is my take again:
     * Yes, they mine huge amounts of data (a Googled Android phone sends home 10x more data than an iPhone) and destroy people's privacy under the pretense of being nice - this has huge implications for society that deserve a gigantic separate thread I'm not going to open, for lack of time alone (see The Social Dilemma on Netflix)
     * They quietly but quite openly invest the proceeds in R&D on super-intelligent AI, effectively working on deprecating people as an intelligent species. Co-founder Larry Page gloated about this objective of theirs and supposedly called Elon Musk a "speciesist" for questioning that policy. Seriously?
     * Sundar Pichai made $250M last year lol. Social justice? He is a smart guy, but not $250M/year smart, sorry. He hasn't really taken any huge entrepreneurial risks to deserve that kind of reward. That's a few times the comp of Tim Cook BTW.
     Are they the worst? The answer is clearly "no, Microsoft, and probably Meta, are", but that doesn't mean we have to choose the slightly less bad option. Sure, they provide some services, but are any indispensable? Nope, apart from core search and YT perhaps, which can't be accessed anonymously from any system. Of all the FAANGs, Amazon would be the hardest to ditch for me; fortunately they are not really pushing the envelope on AI to the same extent Microsoft, Google and Meta do.
     As for GrapheneOS, it looks really good, and that's how they make money: https://discuss.grapheneos.org/d/2701-how-graphene-os-makes-money It's also open source, so people work on projects because they want to have this product available, and no corporation will provide it... @Reciever Have you had a chance to try Graphene on your Pixel yet? I say try, but that's probably a one-way street into custom ROMs. I'm torn between this and just taking the lame shortcut into an iPhone. Arguably the best large phone on the market currently is the Samsung S23U, which (apart from the ridiculous price tag) unfortunately has no great degoogling options available, and is even worse than the Pixel in terms of dealing with data in its stock state. Arrgh.
  2. Clearly Nvidia doesn't give a damn about those pleb gamers who cannot afford a 4090-powered desktop (a 4090 laptop is acceptable, and thus granted a generous 256-bit bus). It's all about cashing in on AI. If things go the way of crypto mining there (probably not for a while), they will just cut an exec or two and the next guy will issue a statement explaining that AI has always been a problematic technology and they never truly believed in it. They will say they love gamers so much they will bring the 256-bit bus back lol, but for now that's reverse science fiction.
  3. This seems to trade VRAM for compute, so ultimately low-end 8GB cards may struggle anyway if they don't have enough GPU capacity to handle this in games, especially Turing and Ampere (if the feature is even made available on those architectures). The team used a 4090 for their research, where the technique incurred 2.5-4x higher decompression latency compared to custom HW decompression (depending on the target quality), while using around the same amount of memory as the reference compressed textures (albeit of lower quality).
  4. Cool, good advice as always. I look forward to bro @electrosoft's report on this.
  5. Yes, I missed the fact that the "scales" are real and not just a cool design painted on lol. Clever. How difficult are those to install on, say, G.Skill Trident Z5 or Corsair Vengeance? Warranty voided after removing the original heat spreader, I guess?
  6. UK watchdog to review AI models like ChatGPT - FT. The same regulator just blocked Microsoft's takeover of Activision-Blizzard. Promising.
  7. If this is for air cooling then they look a bit sub-optimal from a performance perspective. They need some features that increase the surface area, like grills or something. I would understand the simple block design with holes for water cooling.
  8. I mean, it's not hard to see where things are going. The simple question to answer about the very short-term impact is: what percentage of a given job could the current, or a slightly more advanced, AI do much faster and at close to zero cost? The expected job losses in that area roughly correspond to that fraction. Remember that - as with the Industrial Revolution - the overall global economy will shrink as a result (because of the extra unemployment), so the affected will be unlikely to find alternative employment (on average) in anything but the least desirable areas. The process has already started, although it's just the tip of the tip of the iceberg. You are absolutely right: what people are failing to grasp is the size of the snowball, which will grab and smash pretty much the entire workforce, from scientists to plumbers and farm workers (just google agricultural automation if in doubt).
  9. That's all well and fun, but what if creating those daunting Linux scripts was a significant part of the job and a part of the value proposition? If that's effectively automated, what added value is left? Consider a larger company which employs 10, 100, or 1000 people in a similar role. If, say, half of that work can be automated (a conservative assumption), they will reasonably have to consider axing 5, 50 or 500 people, unless there is something else for them to do that AI can't. Moving around might not be easy, as there will be people in other areas in a similar position. I very much doubt the US will be lagging; quite the contrary - it will be at the forefront of the change, being a bare-bones free-market system dominated by large corporations. See the update from IBM above, and consider the layoffs in the US tech sector which happened to coincide with the reveal of ChatGPT. AI is often compared to the Industrial Revolution. The interesting thing there is that industrialization initially resulted in a loss of jobs and economic output. Eventually, people managed to use their brains and re-adjust. This situation is fundamentally very different: there will be nowhere to run but to lower-skill, lower-pay physical work, until humanoid robots arrive, that is. Clearly you haven't had a chance to go through all the content and videos, but let me repost a brilliant quote from the Max Tegmark interview on page 3 (he unfortunately makes the mistake of trivialising the economic impact, a common theme with the self-interested AI community): "We're rushing towards a cliff, but the closer we get, the more scenic the views are."
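     As a side note, here is a rough back-of-the-envelope sketch of that headcount arithmetic - purely my own illustration, assuming output stays constant and that the automated fraction of the work translates directly into redundant roles (Python):

         # Back-of-the-envelope estimate (illustrative only): if a fraction of a
         # role's workload is automated and output stays constant, roughly that
         # fraction of the headcount becomes redundant.
         def redundant_headcount(staff: int, automated_fraction: float) -> int:
             return round(staff * automated_fraction)

         for staff in (10, 100, 1000):
             print(staff, "->", redundant_headcount(staff, 0.5))
         # prints: 10 -> 5, 100 -> 50, 1000 -> 500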
  10. Thanks. Updated the list. The system was more watertight on NBR - possibly the only technical advantage of the old site. Not a biggie, but worth addressing if possible. Alternatively, perhaps firmer enforcement of a reasonable standard of posting might be worth considering, so that even fairly patient users are not forced to use the feature for sanity's sake. Without this, the noise is still there for any new and uninitiated visitors to see, which could be off-putting.
  11. I agree so much, I bothered to create a thread on this. Shamelessly plugging.
  12. I am calling BS on their policy - they just want to entice people to interact with the platform. They have no real incentive to delete real user accounts. In theory they should be identifying and deleting multi-accounts and bots, but again, it's not clear they want to do that. Content is content, and free speech for bots too (might want to stay on their good side before Starship is ready for the ultimately hopeless evacuation of the select few to Mars). BTW I'm abandoning the use of "likes", upon the somewhat belated realisation that they are one of the key tools social media use to promote unhealthy user dependence on the platform.
  13. Once AI reaches that level of capability (luckily it's not remotely there at the moment - it can "only" solve much smaller and simpler problems), we are done as a species. We have 5-20 years to save ourselves from being deprecated, according to Geoffrey Hinton, one of the fathers of modern AI. Sounds like a reasonable timeframe to me, although I would bet on the shorter end just to be safe. No doubt we are quickly approaching the most precarious time in our history.
  14. I mean, it would be good for someone to make a AAA remake of this. Failing that, if people want to take it to powerful AIs, they can do so in real life these days (the primary targets being Microsoft and Google at the moment).
  15. I think the laptop stuff consistently lags behind desktops, where we saw similar issues around the 12th gen launch, i.e. 3600-4400 CL38 speeds no matter what modules. Now (several BIOS updates later) I'm running 5200 CL30 on old Hynix dies and a Z690 board. I believe that's also the case on 12th gen, but I'm not 100% sure; 4800 is definitely solid there.
  16. Hello, would it be possible to close the gaps in the ignored-users function (in the suggested order of importance):
     * Reactions from ignored users shouldn't be visible
     * Noise from ignored users is still unnecessarily shown in the activity feed
     * Filter them out of the "Last post" thumbnails (a couple of users show up on the majority of threads on the site, especially in off-topic)
     * Ideally we would also be able to filter out threads started by ignored users
     Thanks guys. @Hiew
  17. Really not friends with NPR.
  18. Right, all against the backdrop of inflation (the 99% getting poorer, resulting in less disposable income) and consumer PC sales slowing across the board.
  19. Here we go:
     IBM to pause hiring in plan to replace 7,800 jobs with AI - Bloomberg News
     Presumably pretty decent jobs - poof!
     ChatGPT warning knocks $1 billion off market caps of education giants Pearson and Chegg
     Translation: the rich getting richer by shorting the education sector and thus making an informed bet that AI will take it out.
  20. Star Citizen is basically a scam, no?
  21. Like I said, it's quite complicated. Here is a single-item reading list Google may have failed to suggest to you for some reason: https://www.scribd.com/document/452452273/Semiconductor-Reliability-handbook-Renesas-pdf Signing out of this.
  22. LOL. Alright, I give up :) To end on a friendly note, let's just say it's not that simple.
  23. Welcome to non-linearity in physics. When I say risk, I mean risk over a typical lifetime. It could be that you are reducing the MTBF from 100 to 50 years and you still don't care.
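     For illustration, here is a minimal sketch of how MTBF maps to risk over an ownership period, assuming a constant failure rate (a simple exponential model - my own simplification, not a claim about any specific hardware):

         import math

         # Probability of failure within t years given an MTBF, assuming a
         # constant failure rate (exponential model): P = 1 - exp(-t / MTBF).
         def failure_prob(mtbf_years: float, t_years: float) -> float:
             return 1 - math.exp(-t_years / mtbf_years)

         for mtbf in (100, 50):
             print(f"MTBF {mtbf} y -> {failure_prob(mtbf, 5):.1%} risk over 5 years")
         # MTBF 100 y -> ~4.9%, MTBF 50 y -> ~9.5%: the MTBF is halved, yet the
         # absolute risk over a typical ownership period may still feel acceptable.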
  24. It's not binary. Overclocking may increase the marginal risk of a fault occurring, depending on how much and what kind of additional stress is generated, and how long it is applied for. In some cases the risk is close to zero; in others it can be significant (e.g. shunt modding a 600W card to 1kW and running that 24/7 lol).
  25. All I'm saying is that ChatGPT is not the underlying model; the underlying model has no human/programmer rules imposed on it.