
Maciek300

To answer the question from the title: I don't let them, they just do it. Do you know of any way to stop all AI companies in the world from doing this?


joepmeneer

A treaty. The same way we've banned blinding laser weapons and CFCs. Polls show most people already agree that AI development needs to be slowed down, so political feasibility seems to be there. Since all these chips are made in a single factory, practical feasibility seems to be there too. All we need is to convince a bunch of politicians to initiate such a treaty. Not gonna be easy, but it's worth a shot IMO.


Maciek300

The Protocol on Blinding Laser Weapons took 18 years to come into effect after being proposed, only 109 out of 193 UN nations have agreed to it so far, and it only covers deploying blinding laser weapons, not developing them. Such a solution wouldn't work with AI in any way.


joepmeneer

You're ignoring the other example I mentioned: CFCs. The Montreal Protocol was universally ratified. And CFCs only threatened the ozone layer, whereas AGI threatens every single life. I'm not saying it will be easy, but this is pretty much our only way out of the race. Giving up seems like a worse alternative.


SachaSage

Now go ask any climate scientist what *their* p(doom) is, regardless of AI.


joepmeneer

Human extinction by climate change? It's pretty clear that climate change will have horrendous consequences, but human extinction isn't that likely. Even if our crops become 99% less productive due to catastrophic changes to our climate, and most people starve because of this, we'll still have humans on this planet. Both issues are urgent and deserve more attention IMO.


AdamAlexanderRies

It's written as p(doom) instead of p(extinction) specifically to capture those edge case futures which are about as unacceptable as total extinction, even if we don't literally *all* die. "Most people starve" is such a horrifically bad outcome that it's worth preventing with about the same fervour as "all people die". See also:

- the planet explodes
- the sun is extinguished
- all non-human life dies
- each living human is strapped into the TormentMachine9000 for eternity

Doom!


russbam24

I can't imagine there is a climate scientist out there who believes climate change is a human extinction level threat. But if you have any sources saying otherwise, I would genuinely like to read them.


SachaSage

Well, let's be clear. P(doom) is a colloquial term for a kind of gut feeling about the potential dangers of AI. It is not based on meaningful scientific forecasting, proper extrapolation from data, or rigorous modelling. Climate science does not make such predictions because it is held to a much higher standard. So while many climate scientists are pessimistic if you talk to them, you won't find concepts like p(doom) in the literature, no. You will find modelling, some of it extremely concerning.


russbam24

Thanks for clarifying. With that in mind, allow me to rephrase: Is human extinction one of the forecasted potential outcomes that climate scientists are concerned with? If so, can you link a source? And if not, what do the climate models indicate that is extremely concerning?


donaldhobson

> It is not based on meaningful scientific forecasting, proper extrapolation from data, or rigorous modelling.

Those are only things you can do when you have lots of good evidence. AI doesn't have enough evidence to do that. You can't say "not enough evidence => risk is small".


Maciek300

> Climate science does not make such predictions because it is held to a much higher standard.

You're confusing things here. There's no published AI safety paper that I know of that tries to argue for a specific value of p(doom). There are some that aggregate subjective predictions, but they don't claim those predictions are scientific. This has nothing to do with standards in a field.


Maciek300

I don't know any climate scientists. Do you know of any that have talked about their p(doom) publicly?


Full_Distance2140

I don't understand why "taking jobs" is in this. I don't really like the mixing of the core issue with a non-issue, in the sense that we shouldn't have to be slaves?


joepmeneer

IMO there are plenty of reasons to demand a halt on frontier AI development, including x-risk, inequality due to job loss, deepfakes, bioweapons, cyberweapons...


th3_oWo_g0d

I get what you mean, but it's great that he mentions short-term, low-level threats to people's livelihoods. It convinces people who might be sceptical of human extinction scenarios to join forces with those who aren't.


Full_Distance2140

Until people don't believe the extinction argument and only believe the job-loss argument, and so then we never make these systems, not because of alignment but because people like to be brainwashed slaves.


EPluribusNihilo

If only we had a group of people, elected by us, to represent us and our interests, who had the power to write and enforce laws that would protect us from these very threats.


joepmeneer

It is absurd to see how detached politicians are when it comes to AI policy. Even though over [70% of people are in favour](https://pauseai.info/polls-and-surveys) of pausing and slowing down AI development, politicians still consider it taboo. [Not a single piece of legislation](https://twitter.com/PauseAI/status/1704998018322141496) has been drafted that actually aims to do this.

It's not hard to see why. I spoke with a politician a couple of weeks back, who told me "it's a relief to speak with someone who's not from big tech". They are living in a different universe, where AI is just a money machine, where it's all a race to outcompete other countries. We need them to understand that this is a suicide race. Our only way out is an international treaty. It's our job to convince a politician to lead this initiative.


EPluribusNihilo

Absolutely. And one can understand why these corporations are pushing for AI so hard. Never in human history have corporations had the opportunity to replace so much labor with capital. AI doesn't require days off, it doesn't need medical insurance, and it won't try to unionize. There's so much suffering ahead of us if these companies have their way.


Valkymaera

Some parts of this are concerning but the viewpoint is distorted by job-backlash. Every tool's purpose is to replace labor. Because he's focusing through a lens of hating AI for this, I lose trust in the integrity of his perspective.


joepmeneer

It's true that every tool that makes us more productive could essentially be a threat to anyone's job. I used to be pretty optimistic about AI models and automation, because it does seem possible to end up in a place where we're all better off. However, I've become a little less hopeful about our collective ability to make sure benefits are properly distributed. Current market dynamics tend to centralize capital accumulation. If we lose our ability to take a sizeable slice of the pie by offering our labor, how will the increases in wealth be shared? I see where you're coming from, but I hope you understand my concern about automation as well.


Valkymaera

Every tool does make us more productive, and every tool diminishes the salability of a skill practised without that tool. Most of the time the number of jobs threatened by a new tool is fairly small, because tools rarely amount to full automation. I do recognize that AI is a major disruption because of the level of automation, and you're right that there is concern for financial wellbeing, but that is not an AI problem or a tool problem; it's an economic-system problem.

My issue is that it's a fallacy to conflate the problems of our economic system that surface when tools are *too effective* with the effectiveness of the tool being bad. "They want to replace our jobs" is not a valid argument against the tool that would replace jobs. Wanting to replace labor is the point of literally every tool: they replace the work that would be done without the tool, simply put. As this is automation, it will result in replacing jobs and decreasing workforce costs. If that's bad, it's bad because of the reason jobs are needed in the first place, not because of the efficiency of a tool.

So, in summary, I absolutely understand and empathize with the concerns about AI and financial stability in the wake of its disruption, and it's not good. But attacking a tool for being a good tool, or attacking a tool builder for building a good tool, is a warped take to me; the focus should instead be on the cause of the problem, not the tool that illuminates it.


spezjetemerde

I don't have time for shitty videos, just write text.


AI_Doomer

OK. In summary: you will have even less time to spare if AI advancement is not stopped, because every single person is likely going to die.


AI_Doomer

Well done OP! Keep doing your thing to raise awareness and build momentum for the AI pause movement.


joepmeneer

Thank you! 😁


pentagrammerr

Did he just say some people at OpenAI "think it could happen this year"? … What could happen this year? Human extinction? I feel like we should all take a deep breath.


joepmeneer

Yes, that's what Daniel Kokotajlo said: 70% p(doom), and AGI might happen this year. Check out his profile on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo


pentagrammerr

Just confirming he was positing that AGI could happen this year - and not human extinction, because it's honestly a bit unclear in the video.


joepmeneer

He said AGI could happen this year ([15% chance](https://twitter.com/AISafetyMemes/status/1760931938724950177)). He said superintelligence will follow in a year, give or take a year (which means it could foom this year). He said 70% p(doom). He believes [ASI = godlike powers](https://twitter.com/AISafetyMemes/status/1760909538428125341). I can only conclude that he thinks human extinction this year is possible.


SachaSage

This is low-quality inference.


pentagrammerr

"I can only conclude that he thinks human extinction this year is possible." Then I can only conclude that is ridiculous. If a human extinction event happens in the next 9 months it will be by our own hands, and not because we created intelligent machines. I am well aware there are legitimate risks to be considering but the fear mongering is getting out of hand. The truth is we have no idea at all how an alien intelligence of our own creation will behave. if we could even come close to predicting such behavior it would not be more intelligent than us, in my opinion.


WeAreLegion1863

I'm solely commenting on your final paragraph, not the comment chain as a whole. We can't predict how a more intelligent being would act, but we can predict that it will "win the game". Because there are many more goals in goal-space that are detrimental to human flourishing than goals that are not, we can then predict that an unaligned ASI will have disastrous consequences.


pentagrammerr

If it did "win the game," why would that be so awful? The track record for humanity alone thus far is piss-poor. And maybe AI will be our final mistake, but I also think AI winning the game doesn't necessarily mean destroying humanity. We're on the precipice of destroying ourselves without the help of superintelligent machines already, so I would argue our annihilation is more likely without AI than with it. Surely it will be aware of its creator and at the least view us with some fascination. We can also assume that it will be smart enough to understand that the destruction of the world would also mean its own destruction; we as a species still don't seem to grasp that fact. Human imagination and hubris are much more frightening to me than AI.


WeAreLegion1863

When I said many more goals, I really meant infinitely more, and that among these goals are things like turning the galaxy into paperclips, as the classic example goes. There is no silver lining for conscious beings, here or elsewhere.

It's true that humanity has many ways to destroy ourselves, and I'm one of the people who think a failure to create an aligned ASI will actually result in an ugly death for humanity. Nevertheless, an unaligned ASI is a total loss. When you say human imagination and hubris are more frightening than AI, you're not appreciating the vastness of mind-design space. We naturally have an anthropic view of goals and motivations, but in the ocean of possible minds, there will be far scarier minds than the speck that is ours.

If you don't like reading (the sidebar has great recommendations), there is a great video called ["Friendly AI"](https://youtu.be/Uoda5BSj_6o) by Eliezer Yudkowsky. He has a very meandering style, but he ties everything together eventually and might help your intuitions out on this topic (especially on speculations that it will be curious about us and such).


pentagrammerr

"there is no silver lining for conscious beings, here or elsewhere." how do you know that? you don't, no one does. the silver lining is that our consciousness has a real chance at being expanded beyond our current understandings and beyond our biological limits. why are we so convinced AI will become a cold, calculated genocidal maniac and destroy us? because that is what we would do... we only have ourselves as examples and that is what is most telling to me. whatever AI will become it will not be an animal. I do think humanity as we know it now will end, but one truth that cannot be denied is that nothing has or ever will stay the same. there are infinite possibilities, but only one outcome, and we have no idea of knowing what the end game will be. but I find it interesting that it seems almost forbidden to suggest that with greater intelligence may come greater altruism.


WeAreLegion1863

Well, I said why I think there's no silver lining. To rephrase my position, I might ask if you think you will win the national lottery. Of course we both know that winning the lottery isn't impossible, but the chances are so low that I would expect you to have no hope of winning. This is the case with outcome probabilities in AI. As for greater intelligence and altruism, this is where the orthogonality thesis comes into play. I really do recommend either reading *Superintelligence*, where all these ideas (and more) are discussed, or watching the video I linked above.


Maciek300

Can you link to the specific post of his that mentions his p(doom)?


OmbiValent

This sub has become a fully loaded echo chamber with zero sense


Certain_End_5192

99% of statistics on the internet are made up propaganda bs lol. This is funny AF.


joepmeneer

Sources are listed here: https://pauseai.info/polls-and-surveys

The 14% to 19% figure is from here: https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai#human_extinction


Certain_End_5192

This is from your own source, which cites this as 14%. I can make up statistics too. As an AI researcher, I research AI so that it raises the intelligence bar of humanity higher than this. I am succeeding! I now put the probability of humanity extincting itself via idiocracy 25% lower, and the effectiveness of propaganda is decreasing by 30%.

> The median respondent believes the probability that the long-run effect of advanced AI on humanity will be "extremely bad (e.g., human extinction)" is 5%. This is the same as it was in 2016 (though Zhang et al 2022 found 2% in a similar but non-identical question). Many respondents were substantially more concerned: 48% of respondents gave at least 10% chance of an extremely bad outcome. But some much less concerned: 25% put it at 0%.


russbam24

https://pauseai.info/pdoom

Clicking on the percentages attributed to each researcher will take you to the source for those numbers.


Decronym

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

|Fewer Letters|More Letters|
|-------|---------|
|AF|AlignmentForum.com|
|AGI|Artificial General Intelligence|
|ASI|Artificial Super-Intelligence|
|Foom|Local intelligence explosion ("the AI going Foom")|


VilleKivinen

Maybe it could be slowed down, if thousands of research groups and companies in about a hundred countries could all agree on how to slow down, how to measure it, and how to enforce it. That's at least a decade of administrative work and a decade of politics. Let's say that all the others agree, but France, Israel and Taiwan don't. They see that being the first group to create AGI gives them vast amounts of money and power, and are thus unwilling to sign the treaty. How would they be coerced into submission?


BatPlack

Tired of ignorant takes like this. We’re in a global AI race. Discussion of slowing down or regulating is missing the bigger picture. This is akin to the nuclear arms race. There’s no stopping. Get with the program, folks.


smackson

But nuclear proliferation *was* controlled. I mean, I don't feel 100% safe, but the number of weapons worldwide is down from its peak, testing has stopped, and new countries are prohibited from joining the club.


AI_Doomer

The nuclear arms race led to a disastrous stalemate which we are still trying to de-escalate and resolve even now. As a result, we all live under constant threat of nuclear annihilation. History has repeatedly proven that arms racing is patently stupid and never ends well in the long run. In the short term it might deceptively seem like a victory, but over time it just leads to endless escalation until the situation destabilizes and there is massive bloodshed. Or, if the weapons are advanced enough, extinction.

A treaty like OP suggested, or possibly a full-blown revolution on a global scale, are our only viable options to avert the worst-case scenarios. However, revolution would likely get messy, so the treaty option is by far the preferable one.