NotebookTalk

Everything posted by Etern4l

  1. If I were an adversarial foreign power, those are the kind of videos I would try to encourage, if not sponsor (looks like it wouldn't be too expensive either). Divide and conquer. Unsubscribing from the thread on that basis alone.
  2. Yep, but basically it just spews out the most probable answers (in some obviously non-trivial sense), given the corpus of Internet material it's been trained on. It tends to be better than googling, especially on trivia or if you are just looking for a quick and rough overview. It tends to break down on deep dives. Yeah, it's hallucinating a lot on this Linux stuff - I asked it a few Qs about Gnome and got wrong responses each time (mostly referring to old versions of Gnome, even though I specified the version of Gnome I care about).
  3. Will read through sometime this weekend; however, for now I will just post what ChatGPT had to say :) Advice #5 is an example of the algo being dumb and making things up - there is no setting of the kind in Tweaks under Fonts, and what would it have to do with fonts and scaling in the first place? But sometimes it comes up with something useful. I have no view on 1-4.

It seems that you have encountered several issues while trying to set up and use Linux as a daily driver on your Dell Precision 7770. Here are some suggestions to address some of the issues you mentioned:
1. Regarding the refresh rate: Check if you have the appropriate driver for your display panel installed. You can also try running the xrandr command in a terminal to see if it lists the 120Hz option. If it does, you can set the refresh rate using xrandr.
2. To disable automatic brightness at the login screen, you can try adding the following line to the /etc/gdm3/Init/Default file: xbacklight -set 100
3. The fingerprint reader on the Dell Precision 7770 is currently not supported by Linux, so you won't be able to use it for login authentication.
4. To run a specific app without scaling, you can try using the "Scale Factor" option under "Display" settings in GNOME. Set the scale factor to 1 for the app you want to run without scaling.
5. To change the file size reporting unit to binary (i.e., 1024-based) units, you can try installing the gnome-tweaks package and using it to set the unit to binary under the "Fonts & Scaling" tab.
I hope these suggestions help you in addressing some of the issues you encountered.
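For reference, here is a minimal sketch of what suggestion #1 boils down to in a terminal, assuming an X11 session (xrandr cannot change modes under GNOME's default Wayland session). The output name "eDP-1" and the mode "1920x1200" below are illustrative placeholders, not part of ChatGPT's advice - substitute whatever your own xrandr listing actually reports:

    # List connected outputs and the modes/refresh rates each one supports
    xrandr

    # If a 120Hz mode shows up in the listing, select it explicitly
    # ("eDP-1" and "1920x1200" are example values - use your panel's output name and mode)
    xrandr --output eDP-1 --mode 1920x1200 --rate 120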
  4. How do you know that the usage reported by this 3rd-party modified system is real, and that the resources are spent on gaming rather than on some other tasks (in case your PC has unfortunately become part of a botnet, say)? The fact that you noticed a significant discrepancy in resource utilisation is somewhat strange: it should be the same system underneath, and idle Windows 11 uses 1% of CPU or so. One particularly bad outcome here would be the PC getting involved in some illegal activity, and you ending up with the FBI politely knocking at your door to see if everything is OK.
  5. Our primary instinct - and it's safe to assume every other organism's - is self-preservation: preservation of one's own code (in this case genetic), to be precise. It's rather fanciful to assume AGI would be different and would work hard to preserve humans instead. I'm sure it could happen for a while, but I wouldn't want me and my family to be around when the thing finally gets unshackled (either by us - we can be malign and/or stupid - or when it figures it out itself). Thanks to the aforementioned illustrious industry leaders and billionaires, we may not have a choice. Think about our treatment of animals... what goes around comes around, unfortunately for us.
  6. Here is an expert to clarify: we aren't, in the general sense, although some specific cancers are indeed curable if caught early enough.
Study reveals cancer’s ‘infinite’ ability to evolve
BTW, the earlier references to human intelligence were made in the average sense, which doesn't necessarily apply to scientists.
IQ scores in the US have fallen for the first time in decades, study suggests
I couldn't find the thread you mentioned; I suggest we move any further discussion of the longevity topic there - feel free to post and tag me so I know where it is.
  7. Oh boy, I've been following the topic fairly closely. At the moment, wishful thinking doesn't even begin to cover this. For starters, we are nowhere near curing cancer. Would AGI help? Maybe it would, maybe it wouldn't. But why would it? How would you force it to do it? Is the problem solvable at all? How expensive would the "immortality treatment" be? Would anyone apart from billionaires be able to afford it? How would we control Earth's population? Sorry bro. A bit off-topic BTW.
  8. You weren't alone. Most people with some knowledge of the subject thought this was many years, if not decades, away. In 2016, Google's AlphaGo beat the world champion Lee Sedol in Go, a game previously regarded as basically intractable for AI at the human level. Now Google themselves got kind of caught with their pants down. A lot of "impossible/intractable/maybe in 20-50 years" things have been happening in recent years (the funny thing is: most of it was enabled by gamers and Nvidia), so we really need to try and extrapolate. For most people that's basically impossible given the specialist knowledge required. People can just listen to what the industry leaders have to say (e.g. the interview with Altman), but that will necessarily be a carefully filtered marketing pitch delivered with rose-tinted glasses on. As for AGI, people have previously thought, again: impossible - needs a soul, requires quantum computing, a century away, requires 100 MW of power, etc. Well, I don't think too many people would be repeating those projections now, and if those wishful estimates are proven wrong, I think we are finished - like gorillas looking at those weird leaner apes running around with sharp sticks and making curious, yet impossible to make sense of, sounds.
  9. Thanks for the responses. A lot to go through here, I will try to be brief...

The fact that the adversarial AI scenario has featured prominently (but not exclusively - e.g. Asimov's sweet and naive "Bicentennial Man") in film and literature is completely irrelevant. It's just like saying "nuclear threat? Pfft, there've been so many movies about that, it's overhyped."

First of all, the AI of most concern (I didn't want to muddy the waters, but the industry terms this "AGI" - Artificial General Intelligence) would not generally have to rely on a more or less human-curated dataset to learn from. But even if we incorrectly assume that such a dataset must be involved, two questions remain: 1. Would it be possible for someone to feed the AI an adversity-promoting dataset? The answer is: of course. 2. Even if we take the utmost care in what data we feed to the AI, is it possible that even a moderately intelligent AI would come to pretty obvious conclusions such as:
* Most humans are not very smart, hence of somewhat limited utility
* The Earth is overpopulated
* Armed humans are a particular threat
* Humans are territorial and aggressive
* Humans don't really care enough about the environment and could well destroy the planet
* Humans use a lot of resources
etc.? The above are pretty much statements of fact, and should an AI be in a position to implement a remediating policy, we would be in trouble. At that point it would be way too late. Clearly you are downplaying the risk for some reason.

Again, where do I start here... ChatGPT and the underlying technology are much more than "chat bots" - there is a risk a sentient AI might take our calling its ancestor ChatGPT that as an insult. As for the immediate impact, I answered that earlier. Basically, you now have machines which can communicate on a human level (I'm sure this could have passed the Turing test if they had trained it to do so - which they wouldn't let us know about, for obvious PR reasons: downplaying the risk is the name of the game), which can generate images and art at a level where experts have to argue their merits. It can generate computer code, write essays, pass MBA exams, pass bar exams etc. It's an insane breakthrough that pretty much nobody thought was possible a year or two ago. This will deeply impact at least 50% of all jobs. Hope it's clear why. We could stop here, but this is just the tip of the iceberg. Clearly light sci-fi is becoming reality before our very eyes (HAL-9000 is around the corner), using the current technology. The problem is: what if the technology progresses any further? And of course, as things stand, it will: we are not too far off from actually being able to build your Skynet and a basic terminator (and yet again, to focus on just this one scenario would be a grave mistake).

It's not so much about the particular order, but the fact that AI is easily in the top 5 if we are being honest. Why are we engineering another gigantic problem for our civilisation? The answer is simple: because, as a whole, we are fairly greedy and stupid.

That may be true of laymen's understanding (which is very dangerous, although understandable given the novelty and complexity), but hundreds of AI researchers recently signed an open letter calling for a "pause and think" on advanced AI research.

I mentioned the likely concrete impacts a few times earlier in the thread. Overdelegation is the basic scenario, rehashed in popular culture since the 1980s ("WarGames", for instance). But even in your example, you are making a couple of naive (given our current vantage point already) assumptions, e.g.: 1. that an advanced AI wouldn't be able to take control by force or deceit (it wouldn't be too much of a stretch to imagine an incident not in a Wuhan bio lab but in an AI lab - which of course could happen Stateside or anywhere else), and 2. that the AI wouldn't be able to use 1. to control any long-distance weapons... Basic as this scenario may be, it could well have fatal consequences, and we have no way of absolutely preventing it from happening other than by really banning any use of advanced AI in military decision making, and harshly enforcing this globally.

Yes, but that would have been a pure nuclear/basic automation threat. The reason the system was not fully automated in the 1980s is that it wasn't sophisticated enough (unlike the fictional one depicted in "WarGames"). That's no longer the case, hence we are dealing with a whole new risk (or one grave facet of a new risk). Clearly this problem would not exist in the absence of an advanced AI, hence it's an AI risk first, nuclear second. That said, it's likely that any adverse effects of AI will be multi-modal. Perhaps the most immediate issue is the risk of AI triggering a global economic collapse and/or a nuclear war, one way or another.

Yes, there are short-term benefits, which - as always - are propelling the machinery at the moment. There is very little to no long-term planning built into capitalism. That said, I would say the vast majority of those gains can be realized with the technology we already have. No need to go any further and risk everything just so Elon Musk, Bill Gates, Sergey Brin and Mark Zuckerberg can become the first (and last) human trillionaires on the planet.
  10. They are just trying to catch up to CSGO (except perhaps the console USB device detection thing, since there is no CSGO on consoles: too fast and complicated - never heard of anyone playing CSGO on a console lol). It would be interesting to know how effective those measures are, especially if they utilise weak-a... warnings instead of no-warning bans. They are touting the ability to record and replay top-tier games - something CSGO has had from the start, for all games, where more seasoned reputable players get to review the games as well. They are saying "a team" will be reviewing gameplay videos. LOL, good luck policing a 1M++ player base that way. Despite all those advanced automated and human game review measures, CSGO was still infested with cheaters, both on Valve servers and on eSports "enhanced security" platforms like FaceIt (which requires a low-level anticheat that loads with Windows). It's just that the stakes weren't high enough: if a cheater gets banned, they just create another account and spend $20 on another game licence. Consoles are not immune either - especially the older ones get heavily hacked by cheaters, and we are not even talking about kb+m mods, which don't really do that much. Some people are just a bit stuck on the left side of the bell curve (which is probably not their fault, but still), and consequently feel like there is no choice but to cheat in order to have any fun at all, never mind the fact that they spoil the game for everyone else. Others simply cannot compete at their desired (sometimes very high) level - this includes some "celebrities" such as streamers and "pros" - and if there is a technical edge to gain, they will go for it, especially if it's not strictly illegal, as in the police won't ram their door at dawn for it. Lastly, some people just enjoy cheating and scr..ing other people over; they probably wouldn't even bother playing in a legit way.
  11. I've tried two different noname pads from ebay; one seemed a bit better than the other, but both were really good. I also ordered the 7958 paste from ebuy7, received as advertised via Yuyun Express - haven't tried it yet.
  12. GPT-4 (the successor of the current ChatGPT) not only passed but aced the bar exam. It will be interesting to see how the legal profession responds to this. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233 Generally, should things be allowed to continue along the current trajectory, in the medium term any intellectual profession will be either deeply affected or toast, resulting in economic upheaval on a previously unseen scale. On a slightly longer timeline (to allow for better integration on the robotics side), the human workforce will become mostly redundant. Clearly, at that point capitalist societies would collapse. All of this without taking into account any adversarial aspects of the technology; however, at that point a logical thing to do would be to just wipe the redundant, protesting (dangerous, particularly in America!) humans out, either autonomously or by order of the ruling billionaires (mindful of the lessons of the French Revolution).
  13. Agreed, just curious about a ballpark quantity. I doubt Nvidia discloses this though.
  14. What should worry you more than americium-241 in smoke detectors is who offers better arguments and makes better conversation: you or ChatGPT... Generally, points taken, sir/ma'am: nothing to worry about, it is what it is, etc. Fair enough - probably what most people are thinking at this time.
  15. No, it melts at around 50C and works great on the IHS. That said, some variance in performance has been reported, with the branded stuff from ebuy7 apparently working best.
  16. Just a little joke, don't worry about the movie. That's what ChatGPT said! If it's right - and it probably is (unfortunately, I have to reveal I don't have much faith in humanity and our collective intelligence) - then we are done for. However, ChatGPT being both flawed and biased, nothing is inevitable at this point, even though our odds look really poor with both the US capitalists and the Chinese communists hard at work in the same fatal direction. I find the discovery of fire to be a poor analogy: fire didn't start thinking for us, and eventually for itself.
  17. But it is: https://m.imdb.com/title/tt0093692/ ;) Basically, what you are saying is: "whatever happens, happens, let's not worry about it." Hmm, what could go wrong...
  18. Of course, although looking at the way you wrote that, I haven't watched the movie. Ever heard the expression "boiling frog"?
  19. I think we can look at the evolution and history of the Earth as valuable references. It's not the first time in our planet's history that more intelligent species/entities have started to emerge.
  20. There is a lot to unpack there already, and I will respond in full later. For now, though, I would encourage anyone who hasn't seen it to go through the Sam Altman interview video (I updated the link - not sure why it was showing up as a thumbnail), as it will give you more of a marketing (but occasionally brutally honest) view into the upcoming capabilities and consequences of AI. My favourite quote is: "The marginal value of intelligence will go to zero". There is also another interview with Satya Nadella, too cringy to repost I'm afraid, where he says something like "The society, for some reason, has decided to assign more value to software developers than, say, to care workers, and this will change."
  21. I had to recreate the post to massage the title, as I couldn't remove the original expanded link from the phone. Removed the dead link, sorry. The first one worked, so here goes again:
  22. Let's cut to the chase: here is ChatGPT's own honest response when asked about the top dangers to human civilization in the next 50 years, when forced to order them by decreasing threat severity (it doesn't do that by default - it tries to muddy the water by listing global warming first, which is a threat, but a slow-burning one; manipulative from the outset):

1. Pandemics: The spread of highly infectious diseases could have immediate and catastrophic consequences for human civilization, causing widespread illness, death, and economic disruption. The ongoing COVID-19 pandemic has demonstrated the devastating impact of pandemics on global health and the economy.
2. Global economic collapse: Economic collapses can cause social unrest, food and water shortages, and destabilize governments. The global economy is interconnected, and an economic crisis in one region can quickly spread to others.
3. Artificial intelligence: The development of advanced artificial intelligence could pose risks to human civilization if not managed properly, such as the possibility of AI systems becoming uncontrollable or using their intelligence to harm humans.
4. Nuclear war: The use of nuclear weapons in a war could have catastrophic consequences for human civilization, including widespread destruction of cities, environmental damage, and loss of life.
5. Natural disasters: Natural disasters such as earthquakes, hurricanes, and tsunamis can cause widespread destruction, loss of life, and economic damage. Climate change could exacerbate the frequency and severity of natural disasters.
6. Climate change: While climate change is a slow process, its effects could have catastrophic consequences over time, including rising sea levels, more frequent and severe weather events, food and water shortages, and displacement of populations. However, it is important to note that the impact of climate change could be mitigated through immediate and sustained efforts to reduce greenhouse gas emissions and adapt to changing conditions.

So, the threat of AI is more severe than that posed by nuclear weapons, with the #2 threat being a global economic collapse - something that, in the OpenAI CEO's own words (see the whole video below), AI is likely to cause relatively soon by making a large proportion of the human workforce redundant. It also caveats the AI threat with "if managed properly", even though it's pretty obvious we cannot contain a super-intelligence for long, because it will be so much smarter than us that it will necessarily figure out how to break free without us even knowing what's going on ("Ex Machina" portrays this basic point nicely). We can generally contain horses, cows, pigs and chimpanzees, and even that not with 100% reliability.

To be fair, ChatGPT does a good job explaining why AI poses a more severe threat than nuclear weapons:

* Scale of impact: While nuclear weapons can cause catastrophic damage, the impact is usually limited to a specific geographic area. In contrast, AI has the potential to impact virtually all aspects of human life, from economic systems to healthcare, transportation, and social interactions.
* Unpredictability: AI systems can rapidly evolve and become unpredictable, making it difficult to anticipate and mitigate their impact. In contrast, nuclear weapons are relatively stable and predictable, and their effects can be modeled and simulated.
* Autonomous decision-making: As AI systems become more advanced, they may be able to make decisions and take actions without human input or control. This could lead to unintended consequences or ethical dilemmas.
* Proliferation: The development and deployment of AI technology is occurring rapidly and globally, and it may be difficult to control the spread of the technology and prevent its misuse or abuse.

Overall, while the impact of nuclear weapons is well understood and regulated, the potential impact of advanced AI technology is less clear and may be difficult to control or predict. This is why some people may view AI as a more severe threat than nuclear weapons.

There is an interview with the sociopath OpenAI CEO Sam Altman, where he basically admits his company's employees tend to wonder whether it makes sense to have children anymore. Basically, the guy is one of the major architects of our own demise.

ChatGPT fails to answer questions on what people can do to stop the progress of AI (it keeps claiming resistance is futile, AI cannot be stopped - no kidding), so we will have to resort to our own intelligence and common sense:
1. Cut off any companies involved in advanced AI from funding: stop using their services, stop buying their products, divest. The list would be long, but predominantly we are talking about big tech - here are the primary culprits:
* Microsoft
* Google
* Meta/Facebook
* Elon Musk / X Corp / Tesla
* Amazon
2. Raise the issue with your political representative; demand strong regulatory action / a ban.
3. Economically boycott any countries which engage in significant AI development at the state level - the PRC is the primary culprit here.

Hopefully people will come to their senses and military action will not be needed to enforce compliance (if we are smart enough to realize that's what may be required). Humans generally suffer from a cognitive bias whereby we avoid thinking about unpleasant/negative future events and outcomes (and thus tend to procrastinate on addressing the underlying issues), but let's discuss if we can.
  23. As for the 4090 pricing, I wonder what GeForce 4090 chip volume Nvidia has sold to date. Clearly, there is competition for this level of silicon from the rest of the industry - for example, here is two-faced Elon complaining about the dangers of AI while amassing GPUs for his own AI project at the same time: https://arstechnica.com/information-technology/2023/04/elon-musk-reportedly-purchases-thousands-of-gpus-for-generative-ai-project-at-twitter/