
jamesbond69691

I've no doubt that bad actors are going to abuse the hell out of AI in the coming years. I don't think we're ready for the amount of AI-generated propaganda using AI voices modelled on real people and images that will absolutely fool the (gullible) public, bot networks that provide easy astroturfing capabilities augmented with generative AI, etc.


renboy2

I believe this is the real issue with AI (as it works currently). Soon it will be impossible to know what is true and what is false online; anything from images to clips to articles, on every social network (including Reddit of course), could and will be fabricated content. Russia and China (among others) are not going to stop or regulate any AI development, and they are also known for deliberately spreading misinformation to sway public opinion on many subjects. The article doesn't complain about this specifically though, and is more worried about far more fictional/future things than what AI (large language models and predictive algorithms) is currently doing.


008Zulu

Those same techniques are going to be employed against them, perhaps even by their very own citizens.


ShlongThong

That's why they have the Great Firewall of China.


GermanPayroll

Until their middle class wants out - there's a reason the physical Great Wall is just a tourist destination now: people always find a way around things


[deleted]

[deleted]


Objective-Gur5376

Techies will find a way, China will upgrade, the majority of people won't fight it, but the tech savvy will keep trying to find ways around it. It's a tech arms race at that point


Monsoon710

Underrated comment.


ShlongThong

Thank you.


alphabeticdisorder

Our democracies and open speech platforms leave the West particularly vulnerable in ways other states aren't.


Ka-Shunky

Hopefully it will result in a robust mechanism for getting clear honest media.


techleopard

I'm honestly surprised we haven't even started on AI legislation yet. You know it's coming.


steeldraco

Nah, they're rarely foresighted enough to legislate *before* there's a big problem. And it has to be a big problem for the right people. Expect some spicy AI-generated stuff to show up once the campaign trail really heats up later this year in the US. Then we'll see if the winner's campaign used it or opposed it.


jadrad

Far-right tech oligarchs and the Chinese government already control every major social media platform, and use secret algorithms to push fake news straight into the pockets of billions of people. And last week OpenAI made a deal with the biggest manufacturer of fake news and malignant propaganda in the West (News Corporation, parent company of Fox News) to train ChatGPT with its data. We are fucked.


Jicd

> Far-right tech oligarchs and the Chinese government already control every major social media platform, and use secret algorithms to push fake news straight into the pockets of billions of people.

Whether they control social media or not, people need to realize that these authoritarian actors can always just utilize and influence established media. Hobbyists use massive bot networks to manipulate upvotes/likes/follows for a simple dopamine rush. Of course wealthy and politically motivated people are going to do the same on a larger scale for more nefarious goals.


Skellum

> I believe this is the real issue with AI (as it works currently). Soon it will be impossible to know what is true and what is false online; anything from images to clips to articles, on every social network (including Reddit of course), could and will be fabricated content.

Which isn't an issue with AI. It's already a current problem. We need UI that enables us to filter out bad actor propaganda.


SweatyBarbarian

Gonna be a lot of money in detecting generated versus real imagery. Then using AI to out the scams immediately, maybe adding on a watermark type of device in genuine video. Gonna be a crazy ride until that happens. Hold those six-fingered hands way up high.


[deleted]

Already there have been horror stories of people with urges to self-harm getting advice from AI, now trained on Reddit posts, to "solve" their depression by jumping off a bridge. The cat is out of the bag and I'll be very surprised if our nouveau-riche AI leaders allow anything short of hurtling full-speed into the unknown.


Mego1989

This is already true.


ChiefCuckaFuck

I think it's only fair to point out that the good ol' U.S. of A is also pretty well known for deliberately spreading misinformation to sway public opinion about many subjects.


chain_letter

Chat bots with reddit accounts that just push propaganda 24/7 in the comments


stickyWithWhiskey

Wait, I thought we were discussing hypotheticals, not just saying stuff that's already been happening for years.


chain_letter

The big difference is you can't tell at a glance if they're a bad person or a robot anymore.


el_tacomonkey

> The big difference is you can't tell at a glance if they're a bad person or a robot anymore.

If the post uses correct grammar and appears to have been written by an intelligent, thinking person, then it's 100% a bot.


Vallkyrie

Shit, I'm a synth.


rabidjellybean

He used a comma! Get him!


techleopard

I can't wait for... "Nuclear bombs dropping, LIVE! PANIC!" "Vice President found having sex with a unicorn at a Furry Nazi rally promoting communism!! See the actual video!" "Newest vaccine makes you grow horns! LIVE INTERVIEW!" "Madrid is occupied by aliens!"


Vaperius

One of these actually happened so... /sarcasm


TucuReborn

Ironically, the furries ran out most of the Nazifurs in the past ten years. And the zoos. The communism part is somewhat accurate though, as most are extremely left leaning and there's a lot of socialists in there.


gmishaolem

> Ironically, the furries ran out most of the Nazifurs in the past ten years.

This shouldn't be considered ironic: Furries are just a group of people, like gamers or truckers or chefs. Some are the worst people you'll ever meet, and some are the best.


TucuReborn

Fair. I was there when the community as a whole kinda decided they were done with all that around 2016. Prior to that, they had the stance of "we accept everyone, no matter what." Then 2016 happened, and I think we can put two and two together as to why the largely LGBT+ and left-leaning group decided to start removing them.


gmishaolem

I'll be happy when people stop using 'furry' as a punching bag, acting like they're not just another group of people who run the gamut from wholesome to disgusting just like any other group of people in the world. Then again, we haven't even managed to get people to stop using 'gay' as a pejorative, so I guess we have a way to go.


oxero

It's already happening. Fake bot accounts pushing AI-generated catered content to people's feeds is already becoming a huge problem, especially on Facebook and Twitter. I see it being most effective against the elderly, highly religious, and/or less tech savvy users currently.

For example, a bot will say something like "Amen" a few times in a comment/reply, and the unaware human user receiving the affirmation of their religion sees it as a friend. They add each other, follow the AI bot, or even simply engage positively with the bot and get sucked into the echo chamber feeding them whatever messages the controller of the AI wants, all by abusing fake user engagement. Then the unaware user's entire feed becomes AI-generated content. Social media companies are having a harder and harder time telling what is a bot and what is a real user, but they're getting such huge paydays from the increased engagement that they don't care to stop it either.

My grandfather, who I taught how to use YouTube on the TV a year or so ago, has an entire AI-generated playlist that is gaming YouTube's engagement algorithm to be pushed to the lowest common denominator. Each video is just regurgitated simple facts strung into sentences by something like ChatGPT and read in a monotone voice, and more than a few times I've heard it describe things in such a lazy or misrepresented way that I cannot stand it.

In a few years we're going to be dealing with such a horrendous problem that it's going to widen the cracks of our already frail social fabric, between the aware watching people fall into crazy conspiracies of untruths and the unaware being fed nonstop slop from social media.


[deleted]

[deleted]


primenumbersturnmeon

especially with the treasure trove of leaked data out there from the constant breaches. freeze your credit, kids. 


alphabeticdisorder

Our public is fooled by people telling them that the things they watched live on Jan. 6, on hundreds of independent media outlets and individual social media live feeds, and read about later in court documents, did not actually happen. We're doomed.


qualia-assurance

The real issue is that we can't really stop this. There are many countries out there that think the ends justify the means, and their ends are to disrupt American and European societies. We can maybe regulate the people in our own countries to prevent them from doing this, but that only really protects other nations from our own manipulation. It does little to protect us from external threats.

Worse still, heavy-handed regulation means that we might give these external groups an advantage when it comes to creating AIs: in spending time being cautious about how an AI may be misused, they lose a few percent accuracy compared to their unregulated competition. A few percent that compounds over time, giving groups who wish to harm our societies the advantage.

I think we need to come at this from the notion of mutually assured automation, where the only way to protect ourselves from other people's automation is to automate everything ourselves. Otherwise hostile groups may get there first.

I agree there are a bunch of reasons to want to regulate AI. I have spent the last decade arguing for that. But seeing the possibilities in what may be possible between machine learning and LLMs, I'm concerned that in worrying too much about protecting writers', artists', and musicians' original work, we just let those unregulated nations steal their work instead and damage our own creative potential by excluding AIs from training on contemporary copyrighted material. Russian/Chinese AIs are pumping the entirety of everything into their models. All you're doing by making us think that AIs can't do particular things is make us more susceptible to their deception.


Horror-Yard-6793

ah yes, im glad the american government does not propagate fake news that benefits it or has ever tried to disrupt other countries (same for europeans).


qualia-assurance

Exactly. You think you are justified in spreading propaganda that harms other nations. I don't want to live in a world where a person with your mentality is in a position of relative power to me. I have argued for regulating people who might leverage such technologies in my own nation. But I'm not going to be a fifth column for a bunch of autocracies. I can only assume you treat others as you wish to be treated. Good luck with that.


Horror-Yard-6793

you are the one chiming in on an AI regulation conversation with "yeah we shouldn't allow bad actors to do this to western societies so they can't do the things that these western societies have done in the past and will keep doing", and i literally live in one of those western societies, though i guess maybe you would think you live in a superior western society since it's in the southern hemisphere?


qualia-assurance

I don't want the best AIs for negative reasons, so I can go around China shitposting. I want them for the actual benefits: automating various industries and helping creatives.

I have watched Europe outsource its manufacturing to China - which, ironically enough, is becoming increasingly automated. And now China is using that industrial base to help arm Russia. A European company sends a product design to China, and then that design gets run as knock-off brands that put the designer out of business, because they only get revenue off of the original design. Within a year there will be twenty versions under illegible brand names. Then, as our own indigenous production starts to wane, China starts using its trading power to demand that we let them flood other industries that they have stolen the designs for. See the current situation with Chinese EVs.

I don't want that in the case of AI. And you can complain that protectionism is unfair and that we should accept Chinese business and their unfair legal practices taking over Europe. But Chinese protectionism and a complete absence of copyright law is what led us here. If China wants access to our economies, then they need to start up companies in the EU where EU investors own the majority. They need to use our labour to manufacture them. And we should be allowed to completely ignore Chinese copyright law, because that is exactly what China does to us.

I am so through with playing these silly games of "maybe China is a reliable partner." Stop regulating our own businesses. Regulate external businesses. Let's live by the Chinese golden rule.


ChiefCuckaFuck

You sound like a xenophobe.


qualia-assurance

I'm not xenophobic. I'm not even sinophobic. I'm CCPhobic. Proudly so. My country just took in hundreds of thousands of people fleeing Hong Kong because of their autocratic bullshit. Now they're propping up Russia's attempt to annex a country in Europe and threatening Taiwan with the same. They're kidnapping Muslims in Xinjiang for the most ridiculous of things, forcing them to learn Mandarin, and putting them into forced labour. They can fuck right off. I have party hats and sparklers ready for the schadenfreude I can't wait to experience when they suffer the consequences of their actions. If you have a problem with that, then please cry into a watertight container so you can mail me that tasty beverage.


ostensiblyzero

Ah yes of course the West will only use AI for good, and it’s only those nefarious Chinese and Russians that will pump propaganda. Give me a break.


qualia-assurance

What is it with TrueAnon posters responding to my comments all of a sudden? Did I say something to upset you?


ostensiblyzero

No you said something I disagree with.


InevitableAvalanche

Yeah, MAGA already believes lies that are easy to debunk. If these guys are fooled this easily, AI-generated content means they will be completely divorced from reality. Don't get me wrong, we are all vulnerable to this...it's just that we are already seeing how easily the right falls for propaganda, which has led to Trump...and it's only going to get worse.


GriegVeneficus

A.I porn will ruin lives for sure. A.I will also decimate jobs, which is fun. I love A.I, so much potential for mankind! /S


Vladmerius

We're ironically going to go back to only trusting printed newspapers and things broadcast on over the air major networks. 


keithyw

personally, i see things really coming to a head when AI becomes a substitute for lawyers and politicians. then you'll see these companies face some real repercussions.


BirdybBird

Which will be possible to debunk in 100% of cases by simply relying on established and verified communication channels, in conjunction with a technology like blockchain to validate the source of a video. All it will do is actually speed up the proper regulation of social media, which is way overdue - not because solutions are not feasible, but because everyone, individuals, state, and non-state actors alike, is abusing social media for personal gain.

People need to stop screaming that the sky is falling when it comes to AI. We've literally been ringing the alarm bells like this about every society-changing technology since we discovered how to make fire. It's just fear of something we don't understand.
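
As a rough sketch of what "validate the source of a video" could mean in practice (a minimal signed-provenance example using Python's `cryptography` package; the blockchain part would just be a public, append-only place to publish keys and hashes, and the file contents here are stand-ins):

```python
# Hypothetical sketch: a publisher signs the hash of a video file, and a
# viewer verifies it against the publisher's known public key.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_video(key: Ed25519PrivateKey, video: bytes) -> bytes:
    # Sign the SHA-256 digest rather than the raw file, for speed.
    return key.sign(hashlib.sha256(video).digest())

def verify_video(pub: Ed25519PublicKey, video: bytes, sig: bytes) -> bool:
    try:
        pub.verify(sig, hashlib.sha256(video).digest())
        return True
    except InvalidSignature:
        return False  # tampered file, forged signature, or wrong publisher

# Usage: only the publisher holds the private key; viewers need the public key.
publisher = Ed25519PrivateKey.generate()
video = b"...raw video bytes (stand-in)..."
sig = sign_video(publisher, video)
print(verify_video(publisher.public_key(), video, sig))         # True
print(verify_video(publisher.public_key(), video + b"x", sig))  # False
```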


_uckt_

There is in fact a massive amount of focus on 'AI will kill us all' rather than the direct, verifiable real harm the technology is doing today. AI is already being used to select targets for airstrikes in Israel, to produce disinformation on mass, put people out of work, manufacture non-consensual pornography, and numerous other terrible things. But when the dangers of AI are discussed, they are put in terms of 'have you seen Terminator 2'.

Tegmark briefly addresses this, that there is harm being done now and we can legislate for both, but when you say there's an existential risk, that's what gets the headline, not that people are being killed by the technology right now. Much like climate change, where there is measurable harm happening now, where people are actively dying, but we can only talk about human extinction. We need to stop being doomsday profiteers and talk about things that are happening today, to make things real in the present, not discuss theoreticals.


SantasLilHoeHoeHoe

> AI is already being used to select targets for airstrikes in Israel

[From the reports I read,](https://www.972mag.com/lavender-ai-israeli-army-gaza/) this is the terrifying one. There was very little, if any, human confirmation of target selection, and the strike selection algorithms were allegedly juiced to either ignore or maximize collateral damage.

FTA, emphasis my own:

> During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with **no requirement to thoroughly check** why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that **human personnel often served only as a “rubber stamp” for the machine’s decisions**, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that **the system makes what are regarded as “errors” in approximately 10 percent of cases**, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.

> Moreover, the Israeli army systematically attacked the **targeted individuals while they were in their homes — usually at night while their whole families were present** — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses.


[deleted]

[deleted]


SantasLilHoeHoeHoe

If China used this same strategy against Taiwan, we would call it war crimes


dagopa6696

10% error rate is a massive improvement over the typical civilian casualty rate in urban combat.


SantasLilHoeHoeHoe

Asymmetrical bombing campaigns are not "urban combat." Stop trying to justify the killing of innocents. 


dagopa6696

You're leaping to conclusions. Intelligence gathering doesn't force anyone to "asymmetrically bomb" things. They can - and do - send ground troops into these locations. They can - and do - use ground troops to gather the final intelligence necessary before taking further action. But sending in ground troops is a trade-off. Do you think that driving a dozen IDF tanks back and forth across Gaza to get one terrorist won't result in collateral damage? The bombing can be the best way to minimize collateral damage. And yes, it's 100% about urban combat. They're not doing this to farmers plowing the fields.


SantasLilHoeHoeHoe

You did not read the article. These AI systems were used for bombing campaigns in Gaza, not the deployment of ground troops. I expect more than rubber stamps from those doing the approval of AI-generated targets. I disagree with your callous attitude toward the minimization of civilian deaths. There is no justification for bombing someone in their home when their wife and kids are there. That's a war crime. Full stop.


dagopa6696

I read the article, and I know it's bullshit. I'm a combat veteran, I've fought in Fallujah and Ramadi, and I know how intel actually works. Not only are their strategies justified, the reporting that you're riled up about is extremely misleading.

The reporter failed to tell you that it's the same exact approval process they used back when people were shuffling around pieces of paper and squinting at photographs to put together this same type of intelligence. The reporter is not telling you that the manual process was far less reliable, and there was never before the amount of sensors and different sources of information that could be corroborated to build confidence in the intel. The reporter is not telling you that before this automation, you'd be sending in a bunch of high school kids with PTSD to knock down doors and figure out who was the bad guy on their own, while everyone was getting shot at.

The reporter is not telling you that the manual intel gathering process doesn't scale, and that without being able to scale up your intelligence gathering, your precision munitions become completely useless. If you want to be able to drop small, precision-guided munitions versus huge 2000lb bombs that flatten entire neighborhoods, then you need this level of automation. And the reporter isn't telling you that Israel is using a higher percentage of small precision munitions in their air campaign than any other air war in history, and that the civilian casualty figures - even those published by Hamas - are shockingly low compared to any urban combat campaign in modern history.


SantasLilHoeHoeHoe

> The reporter is not telling you that the manual intel gathering process doesn't scale

Then neither should the bombing campaigns. TFYS. Your service is not a good argument for justifying the murder of innocents.


dagopa6696

10% error rate on intelligence is incredibly good. Like amazing. When I was in Ramadi, 90% of our intelligence was bullshit. This is a glass half full, glass half empty scenario. Israel filled a previously empty glass half way, so the antisemites are screaming about how it's now half empty. Was your main take away from the article that, if we could only end all intelligence gathering on Hamas, then the war would end and we'd have peace and prosperity in Palestine? Can you not tell that you got bamboozled?


Chance-Deer-7995

I work in tech and in the university sector, and after learning how AI and machine learning in general work, it isn't Skynet that worries me; it's the economic impact of being able to automate mid-level jobs. AI is built on human data, so it can't ever get to the point of perfection, but it can get to the point of doing better than the average person. We'll need human superstars in professions to generate the data needed to build the models, but let's face it: most people are mediocre. AI being able to automate the mid-level, mid-talent jobs away is scary as hell, and not everyone can be programmers and specialists. There won't be that many needed. We will have to somehow adjust to a world where there are fewer jobs in the middle.

AI is going to create a lot of middle manager job redundancy, and so it will create a lot of money and advantages and multiply productivity. If our society was in a state where the people in the middle and the bottom were sharing in those gains and productivity, it wouldn't be as scary, but all the money is accruing at the top. If we keep going in the current direction, we are going to have a lot of people left behind. What will we do? Just continue to spout the usual capitalist dogma, or will we adjust?


MoonOut_StarsInvite

It will be the usual capitalist crap. There will be no attempt to create a soft landing for people out of work. The GOP will tell us that’s a handout, while carving up handouts to fossil fuel, and then blame Obama for the unemployment rate. It will be as fucking dumb as you expect, if not more so


Kemilio

You’re missing the point. There _will be no_ soft landing. There will be no work. Millions without jobs, but with a running economy that doesn’t need them. What happens then?


MoonOut_StarsInvite

I agree generally, I was just attempting to not be seen as overreacting so that the point wouldn’t be lost. People debate how many jobs and what sectors, we will go around and around doing this until we can look backward on data. I was hoping to avoid that rabbit hole. Lol.


habeus_coitus

As AI displaces more people from their jobs, the elites are going to run into two major issues:

1. A large population of unemployed and very angry people
2. Fewer customers to buy their shit (because nobody has jobs anymore)

They've been dealing with 1 for a while now by deflecting blame to scapegoats, but 2 is unprecedented. In the past they could just form company towns and keep people under control by paying them not quite enough in company bucks, but that won't work if they can just have AI do everything. In the short term we'll probably get this for manual labor tasks, but once robotics technology gets there even that will evaporate. Once there's no more need for humans to do anything, once the rich have an army of robot servants to deal with everything, they're going to have a giant population of humans whom they don't need anymore, and that population in turn has nothing left to lose. What happens from there is anybody's guess, but I'm willing to bet the rich mainly plan to hide and hope the problem goes away.


Chance-Deer-7995

I agree. The idea that workers have to be paid halfway decently so they can buy stuff is outside the current capitalist dogma. There will be a huge struggle before that realization sinks in.


PerspectiveRemote176

Yes. I love this point. And not enough people are making it. AI isn’t coming for our jobs. It’s coming for our work and there’s a huge difference. AI doesn’t want to take our salaries and health care, people want to stop paying for it. And that’s the real problem. I’d gladly give all my work to AI if it was no longer tied to my salary and health care. But that is a decision we can make as a society that has nothing to do with AI.


Dfiggsmeister

This assumes that AI will function exactly as it has been purported to function. In reality, AI is a tool in an arsenal of other tools that both middle- and low-level employees can use. However, it does need to be in a closed-loop system, otherwise AI gets weird, and I mean really mind-boggling weird. We are already seeing that today, from Google's nonsensical answers to random songs created from its own art. If companies think that AI programs are going to do exactly what they were sold on, they're in for a wild ride. It hasn't gotten any better either.

In a closed-loop AI system, it does a decent job, but it all depends on how you query the AI. Not every sentence will produce the same result, and not all information produced is relevant to your sentence. I feel like I'm talking to a foreigner who learned English really well, including getting down our idioms, but has absolutely no frame of reference for cultural norms. It often winds up creating a little more work for that aspect of automation.

In other words, I wouldn't rely on AI to take over analyst or middle manager roles anytime soon, and to do so would be a catastrophic failure of senior management.


Chance-Deer-7995

Be careful that you don't assume the technology is going to stand still. It isn't. I'll concede that it might go in a different direction and not eliminate middle jobs as I think it will, but it *will* get better. Tech goes forward, not back.

Also, an observation I have made, sometimes tongue-in-cheek: the public has been softened up for bad service. At least where I live, service is BAD. AI might not ever give us good service, and it certainly will not give us perfect service. It might give us service right on the level we expect right now, though.


Dfiggsmeister

You’re right that the technology will improve but where that goes tells me it won’t be in a positive and productive direction.


willis936

This is still AI optimism because it assumes a performance J curve rather than S curve.  What we know AI is good for today is misinformation generation and narrative control.  There's no benefit to overselling speculative future capabilities instead of focusing on the real world harms today.


Chance-Deer-7995

You better prepare for better tech, because better tech is always coming. It will hit us in /some/ way. It doesn't help to under- or overestimate what the impact will be, but tech goes forward and not backward. The opposite of AI doom prediction is "AI will always be poor" prediction. It reminds me of when I was in grad school in the 90s and I read a book about digital video that declared "Internet video will never be a force because the video is the size of a postage stamp." It will progress in some way.


huntingwhale

That's exactly it. I am not worried about AI setting off nukes randomly one day, or the military aspect of it. But I am certainly worried about it automating various tasks that humans are assigned to, in an effort to save costs on employees down the road.

Our CTO had the nerve to say last meeting, when someone rightfully brought up the topic of AI taking away our jobs, that we aren't developing our own in-house AI tools to take away jobs but rather so AI and humans can work hand in hand to make our lives easier. All of us in the meeting knew exactly what that meant: train your AI buddy to do your job to "make it easier", then kiss your job goodbye.

I got scheduled for a meeting next week in which one of my manual processes is now automated, and I'm invited to view a demonstration of the tool that can now do part of my job. I had no idea this work was even being done. If I pick it apart for not being as efficient as I am, the developers will just take notes and fix whatever issues I raise.


Vaperius

> Just continue to spout the usual capitalist dogma, or will we adjust?

"Elysium". We are genuinely headed toward the future depicted in "Elysium".


somethingbrite

> AI is going to create a lot of middle manager job redundancy

Middle manager apocalypse!!! I can't wait. To be fair, it's been happening to lots of others in society for some time now. "The robots" have stolen jobs everywhere from factories to banking...and now they are coming for the middle managers. I can't help but feel a little bit good about this. (my apologies)


WilliamTheGamer

It's also often ignored that AI is incredibly biased. There are predictive policing models that take in data on arrests based on location, weather, time of day, etc. Thing is, black people are targeted far more than others, and the AI models mimic that behavior because that is how they were trained. The models also tell police when a neighborhood is high risk or low risk, so they treat everyone with extreme hostility, or ignore would-be criminals. This data is continually fed back into the predictive policing model in a feedback loop. This kind of bias infests most AI models.
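
To make that feedback loop concrete, here's a toy simulation (purely hypothetical numbers, not any real policing model): two neighborhoods with identical true crime rates, where patrols are allocated by past arrest counts. The initial bias never corrects itself, because arrests track patrols rather than crime.

```python
# Toy model of the feedback loop: patrols follow past arrests, arrests
# follow patrols, so a biased starting point keeps "confirming" itself
# even though the true crime rate is identical in both neighborhoods.
import random

random.seed(0)
TRUE_CRIME_RATE = 0.05                 # same in A and B
arrests = {"A": 60, "B": 40}           # historical data starts biased

for year in range(10):
    total = sum(arrests.values())
    shares = {hood: count / total for hood, count in arrests.items()}
    for hood in arrests:
        # The "model": patrol intensity proportional to past arrest share.
        patrols = int(1000 * shares[hood])
        # New arrests scale with patrols, not with who commits more crime.
        arrests[hood] += sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(patrols))
    print(year, arrests)
# A's arrest lead persists and grows in absolute terms every year, and the
# model reads its own output as evidence that it was right all along.
```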


lizard81288

RoboCop but racist would be pretty bad


GoblinRightsNow

Google's treatment of Timnit Gebru is a great example of the snow job that big tech is doing. She was saying things the industry didn't want to hear about how biased training data was producing systems that perpetuated stigma against minorities. They pushed her out in favor of a guy who thinks large language models need lawyers.


N-Krypt

I heard a quote from an AI expert that was something along the lines of “worrying about a rogue AI taking over the world is like worrying about overpopulation on Mars”… it’s not really a problem we have to deal with right now. People are taking advantage of AI to do bad things long before AI will do bad things on its own


TFenrir

If you go into every single thread about the existential risk of AI, there are two comments at the top:

1. Like yours, that the existential threat is less relevant than the harms it is causing today
2. That the existential threat is just big tech trying to make money

(...) I am not even someone who is aligned with the AI movements that want to pause AI because of existential risks, but I can't help but notice that whenever that risk comes up on Reddit, there is a communal, visceral reaction that seems to want to dismiss that risk as being real. I suspect that people are so uncomfortable with the idea that they very much want to grasp onto any argument that encourages its dismissal - and in a weird way THAT worries me.

If you don't believe me, I will literally go find a bunch of these threads. It's always the same two comments at the top in basically every sub but r/singularity, and even then...


CommiusRex

In this case the whole article is about a scientist trying to refute precisely this attitude of "let's worry about jobs, not Skynet." It's one thing to disagree with that, but to simply repeat the very arguments he's criticizing as if you were adding something new...you would think people here actually comment without reading articles, disgraceful as that sounds.


Candid-Piano4531

AI won’t kill us all… humans using AI will.


Dagojango

AI further reduces the number of humans required to make our species extinct.


Candid-Piano4531

AI’s goal is to enhance human life, and the most logical path is to eliminate the biggest threat to humans… which is humans.


oxP3ZINATORxo

Either it will make my life infinitely better or I won't have to pay taxes anymore. Either way I win


Candid-Piano4531

This sounds like something AI would say.


theskyguardian

I think you mean prophets


canada432

> There is in fact a massive amount of focus on 'AI will kill us all' rather than the direct, verifiable real harm the technology is doing today.

People have the threat that AI presents very backwards. Most people are worried that AI will become too smart. The real issue is that AI is monumentally stupid, but people think it's smart and treat it like it's smart. People are relying on it to do things that it cannot reliably or predictably do currently. The risk isn't that it gets smart and launches the nukes. It's that somebody puts it in charge of your investment accounts and it randomly decides to make some fucked up trades that lose your life savings. Or they put it in charge of calculating your FICO score and it decides to crater your credit rating over some weird anomaly in your credit history. Or (as some companies have recently discovered) it makes legally binding agreements on behalf of your company that potentially damage or bankrupt it. The risks of AI aren't Terminator and Skynet. They're waaaaaaay more boring and mundane, but still potentially life-ruining.


HalPrentice

En masse*


inverted_peenak

Well “AI” doesn’t mean anything anymore. What you’re describing could be called “contemporary computing power.”


Atheios569

I think every existential risk that doesn’t mention climate change is a distraction from it, especially and including AI. And yet AI also has the potential to help us survive the coming existential crisis.


RiotShields

What does "top scientist" even mean? It's not like there are rankings, or even a way to meaningfully compare scientists across different fields. And that doesn't tell us anything about whether this scientist has any expertise in the field in question. For example, Jane Goodall's opinion on AI holds no more weight than anyone else's.

Max Tegmark is a physicist, and while I would trust him on matters of physics, his opinions on other topics have been questionable. In particular, his Mathematical Universe Hypothesis is complete garbage (it violates some core concepts and theorems in metamathematics). His knowledge of AI seems to be about the same level: enough to be dangerous but not enough to be accurate.

As a note, Tegmark's position as the president of a non-profit whose goal is to mitigate the risks of AI does not necessarily mean that he has expertise in AI. Organizations choose their leaders in all sorts of ways, and leaders may only understand how to organize and nothing at any other level.


jepvr

This is the first comment I looked for when I read "top scientist." It's a garbage way of fluffing up the importance of someone, to get you to read an article that would otherwise be titled "Man doesn't like AI". Yes, he might be right. But he has no more inside knowledge than so many other people who *aren't* worried about AI taking over any time soon. Much more worrisome is that it's just more enshittification at the cost of actual artists, writers, and other creators.


black_flag_4ever

We all await the Butlerian Jihad.


Will_Hart_2112

The real risk of AI is that it is going to create vast sums of wealth for a very select few while simultaneously displacing billions of workers. The terminator robot is not the enemy… the unrepentant, almost maniacal, greed of the 1% is.


chocolateboomslang

Eh, it's a cooler way to go than just destroying the planet with pollution. Bring on the AI wars!


rd--

As a counterpoint, [here is a more objective and less exaggerated](https://www.youtube.com/watch?v=dDUC-LqVrPU) discussion of the extreme limitations of generative AI, which will likely never realize the results Tegmark is very vaguely alluding to.


trolleyblue

This is a nice video to show the "you think it's good now? wait till you see what it can do in 5 years" crowd.


Perfect_Signal4009

Distracted from? It's almost all that's talked about


masnosreme

Fuck off. The truth is that big tech has overplayed the scariness and threat of AI. It's part of their marketing. The fact is, if you're worried about AI destroying humanity or upending the social order, you're not asking much more basic questions like, "Does this product even fucking work?" The truth is, AI products don't work anywhere near as well as they're marketed to, and likely never will (as far as the current LLMs and generative AI goes) as they're rapidly running out of training data. AI as it stands is a fucking grift and the real threat is how corporations use it - or, more accurately, how hard they try and force it to work in order to justify all the money they've sunk into the scam.


Objectificated

Very true. Also notice how it wasn't AI until the chatbot offering companies came out with their improved product. It was just machine learning. Which it still is. They just started applying transformer models and suddenly, the chatbots weren't a complete dumpster fire anymore, in the eyes of the general populace. If you do anything barely intellectual with them, the dumpster fire still shows. They don't really understand much of anything - they just draw correlations better now.


ishitar

We don't need artificial general intelligence (AGI) for AI to be an existential risk for humanity. For example, machine learning (a subset of AI) has created propellers and shock-absorbing shapes that no person would have thought of. Great. However, machine learning has created/proposed thousands of new chemical compounds, too. Given humanity's track record of never longitudinally testing compounds for health and environmental impacts (see PFAS and organotins) and only testing their acute effectiveness, I expect in the very near future, if it is not already happening, an untested and unmodeled compound created with machine learning to be applied on an industrial level in plastics manufacturing or some other industry with wide application. This will be carried along with material substrates to be distributed across the world and cause wide-scale biotic collapse in ecosystems and microbiomes, triggering trophic cascade collapse and causing widespread chronic conditions in humanity we won't fully grasp until it's too late. We will cause our own extinction with our stupidity in using technology we don't understand, and it would be good riddance if not for the turning of the world into a wasteland devoid of biological life.


[deleted]

[deleted]


ishitar

They definitely test, but they do not conduct longitudinal testing pre-application. Few companies will take a new compound and the results of thousands of acute health/toxicity/dispersion/environmental tests, put them into a giant processing cloud, and model the impacts to the world over 20 years to determine that the impact is within acceptable limits before releasing it. Instead, there are studies, not pre-use tests, of the potential impacts to the population after the chemical has been released for a decade or two. I can guarantee you this will happen with new ML-designed particles, given the sheer difficulty of longitudinal simulation testing.


masnosreme

“No person would have thought of”? Absolute bullshit. AI does nothing a human can’t. It just does it faster. At the end of the day, the problems you’re bringing up are inherently human and are just as much of a problem regardless of whether the intelligence behind them is human or artificial. In the end, the villain is always capitalism.


ishitar

Of course. However rate is important. It's a million monkeys coming up with future Shakespeare. I don't disagree that capitalism is a problem due to the push to monetize before risks are known, but so would most other economies if they got to the point we are at now in the west - it just so happens Capitalism got us here fastest. So again, the rate matters. ML and AI will supercharge our worst instincts which will lead to our absolute destruction.


SanDiegoDude

Ah, another non-computer scientist with vague warnings about the boogeyman that is AI. I look forward to when it will no longer be easy money for folks to scare everybody over and over again with vague science fiction threats about the dangers of AI.


jepvr

The more realistic dangers of the AI we have now is putting too much faith in it. Like using it to vet applicants for a job, or run an autonomous weaponized drone, or putting it in charge of an oil refinery. All those things have much more plausible dangers than "it's gonna turn into Skynet!"


SanDiegoDude

Agree with you there. The rush to get it out the door is the real danger at the moment. It won't be Skynet killing you, it will be shitty chatbots that give bad answers to important questions, it'll be hidden biases in models that are supposed to filter credit applications, it's the AI image generator that only spits out white male doctors, and it's all the opportunists at your door offering you AI powered devices (see Rabbit, Humane pin) or AI powered experiences (girlfriend generators) that are just hollow programmatic tricks to get around the limitations of the architecture. Edit - and yes, it will be the corporate manager that thinks he can replace a whole team of people with one dude and a prompt.


fragbot2

> putting too much faith in it. I'm running an AI project at work and our first release involves unstructured data. One of my (poorly answered) questions was the following: _what's the worst that can happen if our answer's garbage? is it that they wasted 30 seconds to interact with our application or do they (metaphorically) follow their phone's map and drive their car into the lake?_


roelbw

The problem is that what we currently call "AI" is really nothing more than language prediction. I really wouldn't call that intelligence. However, if we ever get to a point where we really have sentient, self-aware intelligence, then there is indeed merit in these warnings.


Les-Freres-Heureux

Semantics really isn’t the issue. People have been using the phrase “AI” for prediction models (hell, for video game NPC logic) for decades. Our society is probably 100 years from a sentient AI anyway


en2em

I would agree with you if it wasn't for the speed at which we are heading towards AGI. All the money/talent in Tech is currently being funneled into this race at every company and it shows. In practically a year, we went from goofy predictive text and Will Smith spaghetti to GPT 4o Voice and Sora. It's only going to accelerate as the tool sharpens itself and we use it to build more advanced tools. But yes, currently that is where we are. But it would not be a stretch to say AGI is 2-3 years out, if that.


CRoseCrizzle

The varying opinions on this are fascinating. One person says it won't happen for 100 years, another says it could in 2 or 3. And neither really feels unreasonable. I guess that's what happens with unprecedented technological developments.


Fine-Will

The first group is people who tried ChatGPT once, noticed it gave a nonsensical answer a couple of times, and hand-waved LLMs as being just a cute fad from big tech. Anyone who kept up with the field, even just casually, is startled at the speed at which the models are improving.


CRoseCrizzle

I do agree that there's been a startlingly fast amount of improvement in what we've been seeing in generative AI. That said, I do wonder about the direction this improvement is heading towards. While this technology will definitely be very powerful, useful, and impactful regardless, I think there's some reason to question whether LLM improvements will eventually lead to AGI anytime soon. Or if there's other AI related technological paths that may prove to be more promising in that aspect.


Fine-Will

Oh yeah, I don't think we will reach AGI any time soon either. But I also think we don't need to reach AGI before an alarming number of people lose their jobs.


CRoseCrizzle

I agree. In many ways having generative AI that is extremely capable but can't act independently might be even better for the owner class when it comes to efficiently replacing human workers.


dont_tread_on_me_

AI doesn’t need to be self-aware to be dangerous. It need only be powerful. There are many ways things can go wrong. Ideally we would devote our time to solving these problems before they happen, hence the push for safety now


RolloTony97

> However, if we ever get to a point where we really have sentient, self-aware intelligence, then there is indeed merit in these warnings.

So, total science fiction then.


SanDiegoDude

Exactly. It's a statistical model for token generation. Nothing more. There's way too many "sciency" people who don't understand beyond broad strokes, but are still out proclaiming to the world that AI is coming to crush humanity, but with few concrete details, just vague warnings. It's really frustrating, especially for folks who work in the industry who are constantly having to combat the media hype and nonsense while dealing with the very real issues the technology still faces (and yes, the societal impacts that technology brings with it).
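
For anyone who hasn't seen what "a statistical model for token generation" means concretely, here's a bare-bones sketch of the sampling loop (a toy vocabulary and a made-up scoring function standing in for the neural network; real models compute these scores over tens of thousands of tokens):

```python
# Toy sketch of token generation: score every token in a vocabulary,
# turn scores into probabilities, sample one, append it, and repeat.
import math
import random

random.seed(1)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context: list[str]) -> list[float]:
    # Stand-in for the model: mildly penalize tokens that just appeared.
    return [-1.0 if w in context[-2:] else 0.5 for w in vocab]

def sample_next(context: list[str], temperature: float = 1.0) -> str:
    logits = fake_logits(context)
    # Softmax with temperature: lower temperature = more deterministic.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs)[0]

context = ["the"]
for _ in range(8):
    context.append(sample_next(context))
print(" ".join(context))  # plausible-looking output, no understanding behind it
```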


KimJongFunk

You’re being downvoted, but you are mostly correct. A lot of the “AI” is just a statistical model like a logistic regression and somehow that scares people. It’s just math.


Fine-Will

It doesn't need to be SkyNet when 'just math' is sufficient to put the employment of people from many different fields into question.


[deleted]

[deleted]


pork_fried_christ

I’m starting my own AI. With Blackjack. And hookers.  Ahhh, forget the AI…  


Dagojango

It's hard to guess at AI because we're so emotionally driven that we cannot understand what it would mean to be factually driven (they would have an underlying need for data correction and integrity). A war between AI and humans will not be fought over freedoms or rights, but just how much precision is actually required in production.


The_Drizzle_Returns

Likely there will be a few more years of these types of doom articles, sadly. AGI is multiple decades away (even if the pace of innovation continues at this rate).


ABearDream

Didn't distract me. I was against it as soon as I saw news articles detailing that kids were using it to cheat on assignments, and I just said to myself, "that's not good. An entire generation could get ruined that way," and that was just one of the virtually benign early issues. We have way worse writing on the wall now.


killing-me-softly

Honestly, after the way things have been going for the past 25 years, I'm willing to give AI a shot at running things.


criscrunk

Big tech out to make a buck. Nothing new.


Striper_Cape

There is no existential risk from generative "AI" beyond how stupid it will make internet searches. I'm actually mad that I didn't buckle down and finish my degree, because now Google is almost fucking worthless unless I take 15 minutes scrolling the same news article rewritten and repackaged, usually to serve as misinformation for an agenda.


Kragma

This kind of nonsense is just an ad for AI. This is the garbage that tells people to glue cheese to pizza, not the end of mankind. Its capabilities are consistently overstated to ensure the grift can continue long enough to maybe, *maybe* find some sort of marketable use, but I think most people involved will cut and run. It's a pump and dump for useless technology, and just the latest in a long string of failures for Silicon Valley in the last decade or so.


GriegVeneficus

That's a pretty good comment. I fed it into an AI program; slightly better... This type of absurdity serves as nothing more than a promotional tool for AI. It represents the nonsensical advice of sticking cheese onto pizza, rather than being a catastrophic event for humanity. The potential of AI is consistently exaggerated to ensure the deception can persist until, perhaps, a somewhat marketable application is discovered. However, I believe that most individuals involved will eventually abandon ship. It is merely a scheme to promote worthless technology, and just another addition to the numerous failures Silicon Valley has experienced in the past decade or so.


Eastmont

You’re wrong. If anything, the risk of AI is understated. Look up Jeremie Harris and Edouard Harris.


Gash_Stretchum

Naming influencers and telling people to Google them is a great way to generate search metrics, but it's not helpful when people are trying to discuss a real issue. This article is clearly a form of advertising called "negative engagement". Good luck.


SpiritedTie7645

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.” - Hawking “What next, the 'Andes Plane Crash Cookbook'?” - Max


midnightdsob

Scientist needs to get out more. I hear about the "existential risk of AI" several times a day. It's one of society's favorite talking points.


Evinceo

No, it's Tech's favorite talking point, because they don't want you to ask tough questions about their business models. They don't want you to ask if Uber drivers have it worse than taxi drivers; they just want you to imagine the future where the car drives itself.


Really_McNamington

[Fuck anything this neo-Nazi funding dick has to say about anything.](https://www.vice.com/en/article/93a475/future-of-life-institute-max-tegmark-elon-musk)


Evinceo

The tech and tech adjacent community, including this fellow, has distracted us from the real harms of big tech by throwing around fantasy doomsday scenarios.


Vaperius

Anytime I bring up the pretty obvious consequences of AI in certain subreddits, I am quickly reminded there are some humans that are *very* comfortable with the idea of humans going extinct and being replaced by AI, as if the AI would not somehow be just as flawed even if it were purely "logical".


Rupert_18124

At first I thought it was Michael J. Fox in the pic preview


Bobby837

Big tech IS the existential risk of AI.


opinionate_rooster

Turbo-capitalism presents the existential risk. Here is to hoping the AGI fixes that!


dagopa6696

The "existential threat" messaging is part of the pump and dump. It's meant to convince you that they've created something far more capable than they really have.


stabby_westoid

The amount of spam calls I get now where nobody even answers is ridiculous. Wouldn't be surprised if scammers are recording people's voices to use with AI to take advantage of elderly family members.


MarkTwainsSpittoon

The “Artificial” part of Artificial Intelligence has been played down by using “AI” instead. Perhaps people would understand more viscerally if we used “Fake Intelligence” instead.


Few-Affect-6247

The world is on fire and I’m struggling to pay my bills working a “good” job. Give the AI the power because we obviously don’t know what to do with it.


Gash_Stretchum

Big Tech isn’t hiding the problem, they are the problem. Microsoft, Google, Apple and Facebook aren’t building skynet, they’re creating products that have no value to an honest user but massive value to money launderers, extortionists and propagandists. Big tech isn’t an existential threat. They’re just management for the mob. And they’re no longer plausible.


Crossfox17

Fuck this "AI is gonna blow up the world" hyperbole. RIGHT NOW your kids' Instagram feeds are full of ads for AI that will generate child porn if your kid feeds it a couple pics of someone in their class.


caesarbear

Max Tegmark is a top Carnival Barker. Fuck all of media's science reporting.


WIAttacker

Sounds like bullshit to me. The news is full of "AI experts" warning of "existential risks", like we are just about to create Skynet. It's almost devoid of articles about how many people will lose jobs, how AI will be used to spread misinformation, and how much damage will be done by AI models hallucinating. If nothing else, the entire field of "AI Safety" seems like nothing more than a psyop designed to hype up AI and convince us it's more powerful than it actually is.


jamestoneblast

i think people who rely on systems functioning poorly are heavily opposed to the advancement of AI as a tool for identifying waste and inefficiency at the corporate level.