Everything posted by Etern4l

  1. Great to see you're getting on. I don't know how you are feeling about it, but to me the existence of an open system like Linux in today's world is a borderline miracle to be cherished.
  2. Not an existential threat to the whole of humanity, but possibly to artists: Sony World Photography Award 2023: Winner refuses award after revealing AI creation. Death by a thousand cuts.
  3. Notebookcheck puts the 13980HX at 31K median and 33K max in CB23 Multi https://www.notebookcheck.net/Intel-Core-i9-13980HX-Processor-Benchmarks-and-Specs.675757.0.html so your exploits are very impressive, but I imagine about as achievable for the average user as 45K on the 13900K lol, such as some results posted by our own @Papusan. Myself, I'm in the humble 40200@<90C club :) Requires about 258W, but this is ultra-stable. I could go quite a bit higher with just "CB23-stable" (which is not very stable at all) settings. Right, that's probably fairly close to the max, unless you start artificially chilling the laptop, but then again you aren't even using a cooler, which is impressive. Totally depends on the loads; I wouldn't want to run my CB23-grade workloads at 100C all the time (confirmed not to be a great idea on the desktop side, a laptop would melt after a while), so it would be interesting to see how much of a hit there is from capping temps at 90C. Might not be that big of a hit actually, but then again, the CPU is already very efficient at 175W. One thing that would be interesting to know is whether any further improvement is achievable with LM. Obviously, the cooler would definitely help as well.
  4. Yeah, the pad is fiddly to apply. Sticking it in the freezer didn't really help, that's why I bought the paste for future applications. The last question in my mind is whether Conductonaut would be able to significantly improve on this.
  5. Not sure what "the truest sense" means. The surface of the pin doesn't wear out, but either the pin can structurally weaken or the mechanism involved in moving the pins within the socket could experience a problem. Both cases would result in a "bent pin" that nobody really bent on purpose. LGA1700 is a different beast in terms of pin density, too. As mentioned before, my new mobo variant came with a visually different socket, so it's possible there was a teething issue with this.
  6. Congrats, only about 20% slower than what people have been getting on the desktop side, that's not too bad TBH, considering the severe hardware constraints. I am curious what scores you would get with temp limit at 90C (to make the scenario a bit more realistic for daily use).
  7. They will probably try, but 1. These algorithms are not programmed in the traditional way, so implementing those failsafes is difficult 2. What if there is a problem with one of the failsafes? 3. What if one of the failsafes is disabled by a malicious, manipulated, or dumb actor? 4. What if an AI figures out how to bypass one of the failsafes?
  8. Annoyed me as well. You can go to Settings -> Keyboard and remap to get the usual Alt-Tab behaviour.
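For anyone who prefers the terminal, here is a minimal sketch of the same remap via gsettings, assuming stock GNOME key names (adjust the bindings to taste):

    # Keep Super-Tab for application switching
    gsettings set org.gnome.desktop.wm.keybindings switch-applications "['<Super>Tab']"
    # Give Alt-Tab the classic per-window behaviour
    gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab']"
    gsettings set org.gnome.desktop.wm.keybindings switch-windows-backward "['<Shift><Alt>Tab']"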
  9. Like those exploding collars from "The Running Man" etc. Those will be for us bro, once we get locked up in human farms :) A bit more seriously, they do try to "align" the AI; ChatGPT, for example, has undergone PC training and it would be hard to get it to produce any non-PC answer. Good luck with the more sophisticated models though.
  10. I can assure you I wasn't bending them on purpose lol. It takes very little for those pins to move/bend/get out of alignment. The exact cause is unclear, I tend to be quite careful. I am assuming the damage was somehow inflicted by slightly uneven pressure exerted by the contact frame. You also may have noticed that the CPU has a bit of play in the socket. I guess it's possible that the CPU being aligned all the way to one side before the heatsink is mounted could affect reliability. Lastly there is the play in the ILM itself...
  11. Yes, the pins are extremely fragile. You must have been careful and/or lucky enough, so carry on as you were and good luck bro. https://www.techpowerup.com/forums/threads/lga1700-broken-pin.292565/ https://linustechtips.com/topic/1470851-lga-1700-socket-with-broken-vcc-pin/ https://www.reddit.com/r/intel/comments/s64hx1/broken_pin_on_lga1700_motherboard/ https://www.reddit.com/r/PcBuild/comments/xvhvtr/bent_pin_on_lga1700_motherboard/ https://forum-en.msi.com/index.php?threads/z690-lga-1700-cpu-socket-damage.371438/ https://www.techpowerup.com/forums/threads/intel-lga-1700-socket-problem.289718/post-4683026 https://www.ebay.com/itm/204306237589 (never mind my experience lol)
  12. Yes, prediction is very difficult, especially if it's about the future, as per the famous quote from Bohr. The multitude of scenarios compounds the challenge, but we can simplify the problem by considering the probability of the almost certain developments, such as AI systems becoming technically capable of displacing, let's say, "just" 25% of the human workforce (peak unemployment during the devastating Great Depression). Here is my take: 1Y -> Nope. 5Y -> Hmm, not sure anymore. We will see a huge impact by then. 10Y -> Could well happen. 20Y -> Will almost surely happen. Then what? Do we really believe in fairy tales about infinite abundance for all? Do we actually want life on a pitiful dole/universal income (if not effective slavery) for ourselves and our children/grandchildren etc.? What will be the consequences of gigantic unemployment and economic meltdown this time? BTW this is a relatively benign scenario in the grand scheme of things. The situation is a bit similar to being unjustly convicted by a grand jury composed of shifty billionaires and their minions, then sentenced to death. People tend to have several years on death row. Do we give up and do nothing, or try to save ourselves? Appeal, hire private investigators, raise public awareness, talk to the press, write a book, failing that try to escape, start a prison riot etc. The good news is that in this case the entire population would be awaiting the same fate (the degree of awareness might vary, but will increase as time progresses), so hopefully there is safety/power in numbers...
  13. Which is possibly why he had to go, as TechTarget is primarily in the business of selling personal data.
  14. No doubt that if something happens it will be a process. Such regulation would raise awareness in the US, which as we can see at the moment ranges from somewhat to severely lacking. The EU is the largest market in the world. Companies found in breach of the regulation could and would be fined, if not barred from the market. The issue could be raised bilaterally and at the UN level. There is a lot that can be done, and all of those things would improve our odds to some extent (politicians would need to forfeit those fat checks from big tech though, ouch). Well, if not then something like this could well play out:
  15. This. Just Super -> type app name .... Edit: @Aaron44126 I believe this covers adding new apps to Gnome: https://unix.stackexchange.com/questions/103213/how-can-i-add-an-application-to-the-gnome-application-menu I wish there was an app for that, maybe someone knows? Edit 2: Reading fail, it's at the bottom of that page; installed, looks good. The graphical solution is to install MenuLibre. It is available for Ubuntu-flavored distributions via apt install menulibre, or you can install it from source. It lets you categorize apps per Gnome categories and plays well with Chrome apps.
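If you'd rather skip the GUI, the underlying mechanism is just a .desktop file that GNOME picks up from ~/.local/share/applications. A minimal sketch, using a hypothetical app installed under /opt/myapp (substitute the real Name/Exec/Icon):

    # Save as ~/.local/share/applications/myapp.desktop - GNOME scans this directory for menu entries
    [Desktop Entry]
    Type=Application
    Name=MyApp
    Exec=/opt/myapp/bin/myapp
    Icon=/opt/myapp/share/icon.png
    Categories=Development;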
  16. Well, at least the EU are doing something: https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/ https://data.consilium.europa.eu/doc/document/ST-8115-2021-INIT/en/pdf No hard prohibitions or constraints on the development jumped out at me unfortunately, but I could have missed something. They do classify systems which could pose risks to the health and safety of people as high risk, so that could be a useful tool. They are pushing the envelope in the right direction, as they did with GDPR (imperfect as it is), but unless we can castrate outfits working on AGI or abort some early versions, all will be for nought in the long run.
  17. Holy guacamole! Boy am I glad I avoided Asus and their "yebbut the SP ratings" trap. Yeah, it's not great, but at least it's modular, i.e. you get to choose which components to download and run. There is a beefy EULA though, which I didn't read, lest I be left with little choice but to delete Windows along with the MSI software; it still comes in handy for the occasional testing and benching ;)
  18. I have the most used stuff in the dock, otherwise Super-A shows other applications.
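If you want to script what's pinned there, GNOME keeps the dock contents in a gsettings key. A quick sketch, where the .desktop names are just examples (run the get command first to see yours):

    # Show the currently pinned apps
    gsettings get org.gnome.shell favorite-apps
    # Replace the pinned set; names must match launchers in /usr/share/applications or ~/.local/share/applications
    gsettings set org.gnome.shell favorite-apps "['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.Nautilus.desktop']"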
  19. Just looking at the founders' list, what could have gone wrong... Sold out to the metastatic cancer Micro$oft at the first opportunity. The inoperable tumor Nadella in turn flushed their AI ethics team down the toilet as soon as it became really inconvenient: https://www.platformer.news/p/microsoft-just-laid-off-one-of-its Here is another one: they've been hiring/interviewing contractors in LatAm and Eastern Europe to create coding-related datasets https://futurism.com/the-byte/openai-replace-entry-level-coders-ai Similar story with Google - started off with the "Don't be evil" motto, which must have helped the initial adoption, then quietly dropped it. Big tech just needs to be reined in real hard now; the question is: who is going to do it? They've got US politicians in their pockets. (Try not to double-post bro, puts extra load on the database I guess) Edit: Good news, OpenAI are hiring lol
  20. Actually, cynicism originally was about being virtuous, although the word has since taken on a pejorative meaning (hope none of my posts came across as such - that would be completely unintended). Being hopeful is generally not rational - yet we are wired for it as a heuristic to help us navigate the complexity of the world, to support motivation I suppose. It's also not binary: you can be overly optimistic, overly pessimistic, or "just" as balanced and rational as possible - definitely the hardest and most "computationally demanding" approach. Hope is also a huge - and easily exploitable - pitfall unfortunately. Plenty of striking and very painful examples of that throughout history, including recent and immediate ones we are all aware of (I won't spell them out due to the off-thread-topic nature).
  21. Not really a matter of perspective either. According to research, most people just don't even try to think about this and other potentially unpleasant future developments, or indeed try to comfort themselves by focusing on positive outcomes, however unlikely. Don't just take my word for it, have a listen to a UCL and MIT professor of neuroscience: Still, wondering is more than most people do in this case. We need to wonder about and debate AI as much as humanly possible, but (and I hate state interventions as much as the next guy - they should definitely be kept to a minimum if possible) ultimately robust regulation is certainly required in this area; sham pretences of self-regulation (at best) won't do. We don't have people and businesses running around playing with nuclear weapons, and we only have to look to the financial and crypto sectors to see the consequences of poor or no regulation in sensitive areas.
  22. One Ubuntu variant I really liked was Mate. Super light-weight yet aesthetic and customisable. My GUI requirements are otherwise pretty rudimentary, so no idea if it would do all you'd want it to do. If you said A and tried Ubuntu+Gnome, I would say B and try Pop! Based on Ubuntu, and advertised to have the best Nvidia/Optimus integration. This could also help address your last question. This was configurable in Gnome 3; I looked for a bit and gave up. If you wanted this really badly you would probably have to patch. As an alternative, PCManFM seems to come recommended. Just tried it: a bit less pretty, as in less similar to MacOS, but looks lightweight and more configurable, Windows XP kind of feel. Units are base-2 as desired. Good luck and have fun!
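In case you want to give it a spin, a quick sketch of installing PCManFM and making it the default folder handler on Ubuntu-based distros (the .desktop name is the one the package usually ships; worth double-checking on your system):

    sudo apt install pcmanfm
    # Point "open folder" actions at PCManFM instead of the default file manager
    xdg-mime default pcmanfm.desktop inode/directory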
  23. Looks absolutely awesome, I had a 240Hz AW gaming monitor for a while. It was legit, perhaps unlike their latest laptops. (In fact it was too good, had to return it to manage my addiction lol)
  24. I am not sure that's true, otherwise people would be mostly learning new things to deal with the constant fear - we know that's very far from the typical behaviour :) People just don't care unless suddenly and directly affected. It's the boiling frog scenario. Alternatively, this situation is similar to someone walking into a punji pit, just because they were focusing on what looks like valuable loot on the other side. Most people are just completely unaware of the issue, never mind its complexities, and if they are aware, they probably get the information from the mainstream media, or some other source of information deeply skewed by the industrial marketing machine or worse (cf. The Random Thread etc.).
The goal is not to accurately predict the future, that's impossible, but rather to reasonably and conservatively enough (given the stakes) assess the risk. What is the spectrum of scenarios? If there is even a 5% chance of AGI wiping us out in the next 100 years, do we want to risk that? What about a 10-50% chance? Say, have you seen "Alien: Covenant" yet? :)
People may state they would be interested in a partnership, but what is often meant is "I'd love me some AI slaves". For example, John Carmack (the lead programmer of Quake, in case someone is not aware), who left Meta to start working on an AGI in his garage, gave an interview in which he offered a use case: he would like to be able to spawn himself "a couple of Freds" to help him work on projects. The thing about partnership is: there must exist a reasonable balance of power. If one person owns 1% of a company, and the other person 99%, they are not partners - one is the owner, and the other is a tiny minority shareholder. If one partner is 10-100x more intelligent than the other, they are not partners. The smarter one is the brain of the operation, and the other one is the tool - quite the opposite of the relationship people would be rather naively hoping to maintain with AI.
I'm not as optimistic on that, primarily because you don't program AIs per se. It's an old idea, harking back to Asimov's three laws of robotics and the completely different methods people in AI were looking into at the time. You can train it to death, but there are hardly any absolute guarantees of the expected behaviour. At least as of now, and I expect the problem to become even less tractable in the future. We can forget about a friendly Arnie The Good Terminator coming to our rescue, although I guess if it came to that, that would probably be the last desperate line of defense.
Not yet, we probably have a few years left to act. For starters, lawyers of all people to the rescue lol (to be fair, arguably they are in the best position to act, short of a global Human Revolution TM): there is a class action suit against OpenAI/Microsoft over the allegedly illegal use of open source code. Speaking of which, Elon Dearest has abandoned all pretense and is officially jumping on the bandwagon (he was actually already there with Tesla autopilot and his humanoid bot): Elon Musk quietly starts X.AI, a new artificial intelligence company to challenge OpenAI To be fair, he did ring the alarm bell a couple of times, although it looks like it's hard to put your billions where your mouth is.
Actually, it's more of a concern about the known: what is achievable with the current technology already, what will be achievable soon with the technologies actively being worked on, and what will be achievable in the fairly near future, just a few more steps ahead.
Absolutely, and we can already see this unwanted effect of technology in the latest IQ study I posted earlier: all aspects of intelligence down except for visual-spatial processing (3D games?). Well, I wouldn't trivialise it this way. Probably one of the main reasons for caution here is that we don't really understand how our own intelligence works. ChatGPT (GPT 3.5) is no doubt flawed; it's not really capable of autonomously doing involved work, but it's just a PoC/research preview, and already deprecated. GPT-4 improves on it by an order of magnitude in some areas. It doesn't have to be able to solve tomorrow's problems in order to have a huge impact on our species.
  25. What are the usage patterns? If displaying static content for several hours a day (office use), then I would be a bit uneasy about OLED as well, even though I guess we haven't heard any major horror stories lately. If it's a bit of work here and there plus gaming/videos, then OLED in my view.