Everything posted by Etern4l
-
Respectfully, please don't confuse very simple and short-sighted with clear (clarity requires a certain depth to be meaningful). You haven't presented any substantial arguments, and snarky one-liners don't count. Frankly, there has been no evidence of any substantial thinking at all in your contributions here thus far.
-
If that's genuinely their stance then great, good for them, although I would be surprised if they said anything else, given that the current AI models cannot completely replace all writers. They can automate a lot of writing, though. How writers' pay is looking, and how it will evolve going forward, is therefore another question. In contrast, you have places like this: https://www.japantimes.co.jp/news/2023/03/15/business/tokyo-startup-chatgpt-job/ FYI, I'm also fine at the moment; however, the question here is: can we look a few steps ahead and get in front of the issues before it's too late? LOL. Actually, now that you mention it - the demented, the grossly naive, the dumb, the drunk, the stoned and the unreasonably optimistic don't care. An interesting thought, I suppose.
-
Reporting for duty :) BTW I'm skipping the whole host of issues already brought about by narrow AI (ANI). There is a Netflix documentary, "The Social Dilemma", which explains the pre-ChatGPT concerns - always good, since not everyone understands they have been more or less subtly manipulated by ANIs for a while now. Tristan Harris and his Center for Humane Technology also have some good content on the impact of large language models on social media/online content. This thread is more about the drama which is about to unfold. I appreciate that people not familiar with AI may have a hard time wrapping their heads around this deeply unpleasant material, but the bottom line is that unless we do something about this together, we are toast. Again, the first steps are obvious, and although clearly not trivial to execute, they beat trying to make it, burrowed in the woods:
1) Demicrosoft, degoogle, demeta, deamazon and deenvidia (and probably best to deelon just to be safe, despite his helpful rhetoric) - in that order
2) Talk to other people
3) Talk to politicians
-
*Official Benchmark Thread* - Post it here or it didn't happen :D
Etern4l replied to Mr. Fox's topic in Desktop Hardware
The new 48/96GB 5600/5200 kits from Corsair are confirmed to be Micron. The main reasons for anyone to consider getting them: they are significantly cheaper than G.Skill, and if someone is looking to use 4 sticks, the current IMCs are unlikely to support more than a 5200 clock anyway. The question is whether they will run at tighter timings; my worry is that they won't. -
You're missing the point, unfortunately. The sky isn't exactly falling right now, but AI is already causing some serious issues, and it will likely get exponentially worse in the relatively near future. Interestingly, no comment on the Sony World Photography Award 2023 topic... Here is another (super-benign in the grand scheme of things) aspect of big tech AI abuse: We could all just sit in an armchair, grab a bottle or smoke some reefer, and wait - or try to do something about this. Moving to a cabin in the woods is not really necessary yet, and wouldn't actually accomplish much if we eventually lose control of the planet. It's not like you are going to be able to wait an immortal entity out. I guess the idea would be to stay under the radar, like a monkey in the jungle, or a cockroach, in some part of the planet the AI is less likely to claim and transform for its own purposes. Animals (the lucky free ones) do precisely this already; I guess we'd just join our bio-bros.
-
*Official Benchmark Thread* - Post it here or it didn't happen :D
Etern4l replied to Mr. Fox's topic in Desktop Hardware
Ha, so they are on Newegg. A few days ago they weren't even on G.Skill's website. I would prefer to try those over the Corsair Vengeance, but just how long would the wait be until they make it to the sunlit uplands... BTW the site has really been lagging lately. -
Qualcomm chip secretly sends personal data
Etern4l replied to 6730b's topic in Mobile Devices & Gadgets
Thanks. BTW any thoughts on degoogled phones? GrapheneOS looks interesting. -
Nice comforting strawman; hope you have found it helpful. Have you seen this, though: Sony World Photography Award 2023: Winner refuses award after revealing AI creation. The thing to consider here is the pace of progress. Where was this 10 years ago (what a preposterous idea lol), 5 years ago (nowhere really), 1 year ago (wait, something's going on), where is it now, and where will it be in 1, 2, 5 and 10 years? The following illustrates this exponential progress issue, and our flawed, hope-biased perception of it, nicely: (A relevant slide showing the chart at the correct scale, and a human startled by AI progress at T+delta, deleted by site admin) Normally not a fan of those kinds of mixes, but this looks to the point, and perhaps that kind of content is what's needed to get the point across:
-
Unfollowing lol
-
*Official Benchmark Thread* - Post it here or it didn't happen :D
Etern4l replied to Mr. Fox's topic in Desktop Hardware
What's the make/model? -
Has anyone had any experience with degoogled Android phones? I understand the two main options are GrapheneOS and LineageOS. How well do they work in practice? How do they compare? How much degradation of functionality is to be expected, especially regarding Google Play app compatibility? BTW on degoogling via iOS
-
*Official Benchmark Thread* - Post it here or it didn't happen :D
Etern4l replied to Mr. Fox's topic in Desktop Hardware
Awesome, let us know how it performs. That's the new Hynix presumably? (Micron also came out with chips powering the new 24/48 GB modules). -
Here is another, more concise, piece with Tegmark. He is still making a mistake by proposing that regulation and control are the solution. They are not, we provably cannot control AGI, however, he does a good job explaining the terminal risks involved. Enjoy!
-
That voltage looks high, but that's OK - CPUs can draw very little power even at high voltage. Power draw is a function of both voltage and current, the latter always being regulated based on load. For example, I run my 13900K at a fixed voltage (VCORE I believe) and power draw varies from 15 to 260W, clearly depending on the load. Put another way, max voltage determines max power draw at full CPU load. Consequently, I wouldn't expect any significant power draw, and therefore temps, with just a single-core load on a 24-core CPU.
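To illustrate the point numerically (a rough sketch - all the current and load figures below are made-up assumptions, not measurements of any real CPU):

```python
# Rough illustration of P = V * I, with current scaling with load.
# The voltage and current figures are hypothetical, for illustration only.

def power_draw(vcore: float, max_current: float, load: float) -> float:
    """Approximate package power (W) at a fixed core voltage.

    vcore: fixed core voltage in volts
    max_current: current drawn at 100% load, in amps (assumed)
    load: utilization fraction, 0.0 to 1.0
    """
    return vcore * max_current * load

VCORE = 1.35          # fixed voltage; high-ish, but harmless on its own
MAX_CURRENT = 200.0   # amps at full load (hypothetical)

idle = power_draw(VCORE, MAX_CURRENT, 0.05)   # near-idle
full = power_draw(VCORE, MAX_CURRENT, 1.0)    # all cores loaded
print(f"idle: {idle:.1f} W, full load: {full:.1f} W")
# Same voltage, wildly different power: the current, not the voltage,
# is what the VRM actually modulates with load.
```

Same fixed VCORE, two very different power figures - which is why a single-core load barely warms a 24-core chip.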
-
Trying to switch from Windows to Linux, ongoing issues thread
Etern4l replied to Aaron44126's topic in Linux / GNU / BSD
Oh dear, annoying. Hope it goes better for you than for this guy... -
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371 Only managed to watch it in part so far. Generally still very biased - understandable, given that AI research is how he makes his living. For example, he seems to trivialise the economic impact of AI, and claims that AI alignment "seems solvable but very difficult". Well, we don't know if it is solvable even in theory, and even if it is - whether we will manage to do it before it's too late. He came up with a quotable statement though: "We're rushing towards a cliff, but the closer we get, the more scenic the views are." Overall a bit weak, but he did raise a good few AI red flags, which is obviously much better than the usual intellectual nothing (Yay Hooray AI). BTW he is one of the people who defended the engineer sacked by Google for claiming its AI seems sentient: MIT professor warns Amazon's virtual assistant will be 'dangerous' if it learns how to manipulate users - as he defends Google engineer who said its AI program has feelings Meanwhile: EU Lawmakers Call For Summit To Control 'Very Powerful' AI. A parliamentary committee is debating the 108-page bill and hoping to reach a common position by April 26, according to two sources familiar with the matter.
-
A much less impressive improvement - +13% to be exact in this case. It's not clear why a single core, even running at 6GHz, would stress even a laptop to the point of throttling; I don't believe that's what's happening. Optimal utilization of a limited thermal solution's capacity involves downclocking first and foremost (the marginal power cost of one additional step in ratios increases sharply at the high end). Before that, the scheduler prioritises putting load on the primary logical CPU of each P-core, then the E-cores, then the secondary logical CPUs (hyper-threading). Makes sense, as performance scaling with HT is only about 50%. This results in pretty amazing improvements in MT performance within just a single gen, on the same dated fab process.
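The fill order above can be sketched as a toy throughput model. To be clear, every scaling factor here is an assumption for illustration (including the ~50% HT gain mentioned above and the E-core ratio); real numbers vary by workload:

```python
# Toy model of the scheduler fill order described above:
# 1) primary logical CPU of each P-core, 2) E-cores, 3) HT siblings.
# Core counts match an 8P+16E part; per-thread throughputs are assumptions.

P_CORES = 8        # physical P-cores
E_CORES = 16       # physical E-cores
P_PERF = 1.0       # throughput of one P-core thread (baseline)
E_PERF = 0.55      # assumed E-core throughput relative to a P-core
HT_GAIN = 0.5      # assumed extra throughput from an HT sibling (~50%)

def throughput(threads: int) -> float:
    """Aggregate throughput for `threads` busy threads, filled in order."""
    total = 0.0
    # Stage 1: primary logical CPU of each P-core
    n = min(threads, P_CORES)
    total += n * P_PERF
    threads -= n
    # Stage 2: E-cores
    n = min(threads, E_CORES)
    total += n * E_PERF
    threads -= n
    # Stage 3: HT siblings of the P-cores
    n = min(threads, P_CORES)
    total += n * P_PERF * HT_GAIN
    return total

print(throughput(8))   # all P-core primaries busy
print(throughput(24))  # + all E-cores
print(throughput(32))  # + HT siblings: diminishing returns per thread
```

The per-thread gain shrinks at each stage, which is exactly why piling on E-cores lifts MT scores so much while ST barely moves.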
-
Single-core performance doesn't really scale much anymore, but for multicore, Raptor Lake blows Alder Lake out of the water: +33-40% on the desktop side, and similar if not larger improvements with the mobile chips: https://www.notebookcheck.net/Intel-Core-i9-13980HX-Processor-Benchmarks-and-Specs.675757.0.html https://www.notebookcheck.net/Intel-Core-i9-12950HX-Processor-Benchmarks-and-Specs.618743.0.html CBR23 Multi: 20K vs 31K median. Amazing.
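For the record, those CBR23 medians work out to an even bigger jump than the quoted desktop range:

```python
# Percent improvement from the CBR23 Multi medians quoted above.
alder_lake = 20_000   # i9-12950HX median score (approx.)
raptor_lake = 31_000  # i9-13980HX median score (approx.)

improvement = (raptor_lake / alder_lake - 1) * 100
print(f"+{improvement:.0f}%")  # -> +55%
```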
-
Trying to switch from Windows to Linux, ongoing issues thread
Etern4l replied to Aaron44126's topic in Linux / GNU / BSD
Yeah, I thought KDE would suit you better :) My initial impressions after trying Plasma were very similar. Probably the main reason I'm not using it is that it uses more resources, although I also don't really have a hard requirement for any of its features, and the simplicity of the Gnome experience works. Ideally I would use MATE, as it's basically an even lighter, yet more configurable, Gnome - not available for Clear Linux unfortunately. One thing that's lacking is auto-placement of windows in their last positions after login. I need to look for an extension for that, I guess. -
I think a thesis is particularly strong if people look at it from different perspectives and come to literally identical conclusions. Microsoft talking about saving energy (which is a concern on the basis of the pollution produced alone) is a sad joke, given how much energy they spend on running their AI efforts. Also, one could argue that people are important resources too, and working very hard on the construction of an artificial super-species is hardly conducive to saving us...
-
I saw people (and Linus) comment on the ebuy7 stuff. I have only used pads from ebay UK. The one from a UK seller exhibited clear curing behaviour. An identically looking pad from China via ebay didn't exhibit such behaviour, but is working well enough. I also have a tube of 7958 from ebuy7 which looks similar to what @cylix posted. Have yet to try it out.
-
*Official Benchmark Thread* - Post it here or it didn't happen :D
Etern4l replied to Mr. Fox's topic in Desktop Hardware
Hmm, what do you guys think about the actual difference between these 2 memory kits?
VENGEANCE® 96GB (2x48GB) DDR5 DRAM 5200MHz C38 Memory Kit — Black: 5200 38-38-38-84
VENGEANCE® 96GB (2x48GB) DDR5 DRAM 5600MHz C40 Memory Kit: 5600 40-40-40-77
The 5200 JEDEC profile for the latter is also 38-38-38-84. They are almost the same price, a $10-20 difference. I am not sure I will be able to run them at more than 5200 (my Fury Beast 5600 barely boots at 5600, and 5400 is probably good enough for light benching but unstable in a 4x config), but the question is: which one is likely to have better timings at 5200? Is there really any difference (given the almost identical price), or is it just some marketing gimmick? Could the 5600 variant be better binned? I understand the memory chips to be Micron, BTW. The 5200 variant has a bit better availability... Put another way: is it likely that the 5600 SKU is better binned, so it will run at 5600 with no issues in the future, IMC/mobo permitting, and could end up with better timings at 5200 - or is there unlikely to be any difference between the two in practice? My Fury Beast 5600CL40 runs at 5200 30-38-38-38-70 completely stable, which is fairly good I guess. It would be disappointing to see a significant regression. I think (but am not sure) that RAM is SK Hynix, not Micron, though. -
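One way to compare the two rated profiles on paper is first-word latency (just arithmetic on the advertised CL and transfer rate - it says nothing about binning or subtimings):

```python
# First-word latency in ns: CL cycles / (MT/s / 2 clocks per second),
# which simplifies to CL * 2000 / MT_per_s.
def latency_ns(cl: int, mt_per_s: int) -> float:
    return cl * 2000 / mt_per_s

print(f"5200 C38: {latency_ns(38, 5200):.2f} ns")
print(f"5600 C40: {latency_ns(40, 5600):.2f} ns")
# ~14.62 ns vs ~14.29 ns: near-identical latency on paper,
# with the 5600 kit adding ~8% bandwidth on top.
```

So at their rated profiles the two kits are essentially a wash on latency; the real question, as above, is which silicon tightens better at 5200.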
GPT-4: What is it and how does it work? It can accept visual prompts to generate text and, interestingly, scores a lot higher on the uniform bar exam than GPT-3.5 in ChatGPT did (it scored in the top 10% of test takers, while GPT-3.5 scored in the bottom 10%). It's also more creative and has a larger focus on safety and preventing the spread of misinformation. There are other performance improvements as per the company's research, including improvements in Leetcode, AP-level class, and SAT results. So, let's unpack:
* More creative, although that's hard to quantify - still, an ongoing full frontal attack on what was previously hoped to be a fairly unique feature of HI (Human Intelligence), besides the Holy Grail of AI: self-awareness
* Even more advanced content filtering, according to OpenAI's idea of what is information and what is misinformation
* Vastly improved performance on the bar exam and LSAT scores; humans need not apply - a huge step towards making lawyers and other legal professionals redundant
* Significantly improved SAT Math scores, but still struggling with introductory-level college stuff - fortunately for mathematicians/engineers (not for very long, would be my guess)
* A large bump in USABO / medical knowledge exams (medical and biotech professionals/students: start getting ready!)
* Codeforces rating upgraded but still really poor - it seems top programmers are further down the line from the AI chopping block (but the sociopaths at OpenAI are doing what they can to change that)
* Doing well with college-level Art History and Biology
All this progress in less than a year from the introduction of GPT-3.5... For the last century or so, people have been given a social mobility route through their intelligence, hard work and education. Soon AI will not only cut this off for most, but also destroy the livelihoods of those who have historically managed to leverage that opportunity.
-
Trying to switch from Windows to Linux, ongoing issues thread
Etern4l replied to Aaron44126's topic in Linux / GNU / BSD
Haven't had a chance to play around with VMs yet, although KVM seems to be what comes recommended on Linux these days, over VMware and VirtualBox.