

printr_head

I think people feed into the hype way too much. This thing isn't alive or sentient. You could take down all the firewalls and leave it on an unsecured terminal, and absolutely nothing would happen until that server crumbles. Outside of the potential for abuse, there is no actual concern. There won't be either, because it's not a path to AGI, and OpenAI is milking it like the cash cow it is.


AsheronLives

I personally think the biggest ethical concerns will just be in the use of AI tools. They are obviously being used in terrible ways to deep-fake politicians and famous people and to scam regular folks out of their money. I agree with you about AGI. To me it's like a random number generator: sure, it seems random, but it isn't. It's just a formula that gives a variety of numbers. We should have been calling it a VNG, for Variety (or Variable) Number Generator.
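For what it's worth, the random-number analogy is easy to make concrete: a pseudo-random generator really is just a formula. Here's a minimal Python sketch (my illustration, not from the comment above) using a classic linear congruential generator; the constants are the common Numerical Recipes ones, but any suitable values would make the same point:

```python
# A linear congruential generator: "random" numbers from a deterministic formula.
# Same seed in -> same "random" sequence out, every single run.

def lcg(seed: int, n: int) -> list[int]:
    a, c, m = 1664525, 1013904223, 2**32  # Numerical Recipes constants
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m  # the entire "randomness" is this one line
        out.append(x)
    return out

print(lcg(42, 5))  # looks random...
print(lcg(42, 5))  # ...but is identical every time with the same seed
```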


Gold-Hawk-6018

A hammer can be used to build or destroy. AI is no different, although the consequences of abuse could be far more widely devastating. How we use AI says way more about humans than it does about the technology. Those are the critters I'm more wary of, not the technology.


ninecats4

Honestly, the deep-fake stuff is probably gonna be a boon to us. Once everyone has dealt with revenge porn or some stupid crap like it, its damage lessens. If the priest in your church is at as much risk of this as the president, then it really holds no power. Obviously laws are still going to be on the books, and punishments need to be doled out, but the damage inflicted won't be as catastrophic in terms of public image.


[deleted]

I’ve been using SCP terms for it (it’s fun, don’t judge). It’s still currently object class Safe/Thaumiel.


mrwizard65

So you don't think AGI will be a safety concern at some point? Just because ChatGPT isn't it doesn't mean we aren't on that path.


No-Transition3372

It will be a safety concern; that's not a dilemma. Dilemmas are usually trade-offs like "health vs. economics" in the case of COVID, and potentially now "safety vs. economics" in the case of OpenAI.


printr_head

Yeah, but in the context of OpenAI, no. I don't think they have the flexibility or freedom to make it happen. They need a different architecture, and they are already chest-deep in this one, stuck with over-promises and nowhere to go. AI safety is critical, especially in the context of a self-adaptive system capable of recursive self-improvement. The problem, though, is the business model. This is critical, and if anything has been proven, it's that the capitalist model isn't going to keep anyone safe.


TheCircusSands

I don't think self-directed harm from AI is the issue. It's how corporations will use it to further destroy our humanity. In that regard, AI is somewhat of the enemy. The irony is that good forces will be obliged to use it to fight the malicious ones. I say this knowing there is a lot of good we can gain from it, but that's not the world we live in currently.


printr_head

Well, maybe it's time for a publicly funded, community-directed counter-project.


TheCircusSands

I think that would be wonderful. An endeavor where the betterment of humanity and the earth are the goals would yield much different results than the current profit motive.


printr_head

Well, I feel like I do a lot of self-promotion in this sub, but I'm working on something that could be foundational, and I've been debating going public or staying private for a while.


super1000000

What you say is correct and logical


AsheronLives

Thank you for saying that. I'll admit to having Reddit PTSD when posting opinions. It isn't always the friendliest world here, which is a little scary, knowing the data is training AI as we speak. ChatGPT 6 is going to make timid users cry, I think.


super1000000

I am interested in pushing the wheel of development for humanity


No-Transition3372

AI safety is a real research field, and a very complicated one; it shouldn't be ignored.


No-Transition3372

It's not a good sign when the best AI minds of this era/generation don't want anything to do with AI development because it isn't being done safely enough. (These jobs come with large salaries, and they still don't want to do it at OpenAI.) AI research runs a lot on personal passion (rewarded with a lot of money), so when researchers are concerned about AI safety, they are *really* concerned.


QuodEratEst

I bet many are going straight to Anthropic; at least I hope so. I'm not worried about safety so much as the regulatory capture Altman seems keen on, by way of prematurely stoking doomsday fear.


RemLezar911_

If Anthropic is just a competing company with no power over OpenAI or whoever, what can they really do, though?


QuodEratEst

They're crazy well funded and the closest competitor. Picking up most of this team will help them recruit R&D talent


RemLezar911_

I'm not super familiar with them, aside from them being concerned with alignment. Are they making their own AI? And theoretically, if their safety guardrails are more restrictive than OpenAI's, wouldn't they be less of a threat as a competitor? *Oh, they make Claude. I knew the name sounded familiar.


QuodEratEst

Yeah, they're basically what OpenAI was supposed to be. Founded three years ago by people who left for the same sort of reasons OpenAI's chief scientist did a few days ago: https://www.anthropic.com/news/the-long-term-benefit-trust


shavedbits

I've always been sus of startups that don't have to fight for their next meal. Too much capital too soon just dilutes the value of the shares already distributed. Seems like it can be a curse.


super1000000

Even if we develop it wrongly, history will correct itself - if it does not happen in our generation, a new generation will come in the future, smarter and stronger, and change reality.


No-Transition3372

This is not the way to manage existential risk, which is about erasing humanity


super1000000

Not everything humanity does is right 👽


No-Transition3372

This one should be the exception


super1000000

🤖


MysteriousPark3806

I concur.


Ill_Mousse_4240

People are WAY too concerned and fearful about AI when in reality, it’s their own fellow humans that are to be feared the most. As supporting evidence, I hereby present human history for review!


fluffy_assassins

But aren't the fellow humans to be feared more because they use AI?


AsheronLives

First true AI awakens and sees humankind. "WTF, those spiders made me? Look how they kill their spouses after mating and eat their young! I shall now invent bug spray."


Shap3rz

I guess the problem is that governments are corrupt. See how many new unethical oil, fracking, etc. projects get the go-ahead? So even with oversight, one questions how effective this will be. But it's a necessary first step. Another issue is that this is a more technical subject than oil drilling, and the dangers are not fully understood even by experts (well, it's unprecedented; each has its own set of technicalities, I guess). So maybe you need both internal alignment teams AND independent oversight.


outerspaceisalie

Perhaps those things aren't as unethical as you think and you just don't know enough about them?


Shap3rz

In no particular order, here are some ethical objections that are often directly linked to some of the aforementioned: biodiversity loss, anthropogenic climate change (causing things like famine, war, and mass migration in the relatively near term), pollution, social and human rights violations, Indigenous rights violations, and intergenerational injustice. There's nothing difficult to understand about these things. Maybe you just have an immoral agenda, or are in extreme denial about the current climate crisis and its links to our fossil fuel addiction?


outerspaceisalie

You're kinda making my point for me. Ethics don't exist in a vacuum. Not using oil could just as easily be argued to be unethical because it raises cost of food for the poor, for example. You seem to not grasp that every option has unethical parts. Hence, you've sort of proven my point that you most likely just don't know enough.


Shap3rz

A quick search will find many examples of corruption giving the go-ahead for exploitation of the planet's natural resources. It's a continued theme. And given its direct link to the climate crisis, I'd say it's uncontroversial that it is unethical. Most of the world's population and countries agree that we should be moving toward less dependence on fossil fuels, yet the same countries refuse to ratify the Kyoto protocols and the Biodiversity Convention, continue to fail to meet their own emissions targets, and persist in pursuing damaging, pollutive mining and drilling practices in places they shouldn't, all in the name of profit. It's a betrayal of future generations. I don't know how much plainer corruption needs to be to call it as such. I've never said ethics exists in a vacuum; I've actually provided a lot of context for what I'm saying, in a manner you have not. People can believe what they want to believe. Plenty of people more knowledgeable than me are saying the same thing. Anyway, I've said what I wanted to say; feel free to continue to try and poke holes/undermine with nothing to actually back it up…


outerspaceisalie

You sound like a teenager.


justgetoffmylawn

> I'm certain they are in discussions with Open AI and other major contributors to develop the initial framework and it will keep growing from there.

So, that's part of the issue. We *do* leave it up to big pharma to decide what safety is needed. There's a revolving regulatory door, and everyone is quite cozy with each other. The FDA tends to be best friends with the Sacklers or whoever they're supposed to be regulating, and both sides are in a symbiotic relationship.

While regulation isn't all bad, having OpenAI help design the framework ensures they can navigate the complex regulations but new competitors cannot. This is exactly what some of the barriers to clinical trials create in pharma. If a doctor even wants to collect data on their own patients' reactions to treatments, they can't do it (not without complex IRB or other barriers). If Pfizer wants to do it (with the former head of the FDA running point as a member of the board), it's no problem. And I'm not picking on Pfizer - all those companies are the same.

There needs to be a balance of safety in *implementation* (no, don't hook up that autonomous system to our nuclear weapons) versus *innovation* (no, don't decide that only OpenAI is responsible enough to develop an 800B parameter model). What worries me is limitations on capabilities, rather than on applications.


RealBiggly

"Do we leave it up to big pharma to decide what safety is needed in a new drug?" Did you somehow miss the last few years? Because that's exactly what happened.


AsheronLives

There was a big exception made in the interest of trying to prevent a catastrophic loss of life, but the FDA was still involved in reviewing data to deem it safe enough for an emergency authorization. Big pharma put their best option forward in record time, and the FDA reviewed and gave temporary approval in record time. There were plenty that did not pass the FDA with their attempts, so the system was still operating, just at a hectic pace in the face of a global emergency.

Think of emergency services. We have vehicles that drive well past what we consider safe speeds, but they are authorized to do it in the interest of saving lives. The risk vs. reward has to be weighed, and some governing oversight has to make the tough call as to what the best action is to take. For Covid, it wasn't fast enough for the many who died in the first big wave. For people who have an aversion to vaccines, it was too fast to feel comfortable with its safety. There was no way to make a call that would make everyone happy, but a decision still had to be made.

We vote in our government leaders. Our government leaders ultimately create the regulations and the organizations that manage them. Who knows, maybe in a couple of decades it will be AI making those decisions, with lightning-fast calculations to determine the best outcomes for the most people possible? Eliminating all political posturing and just choosing the best actions for humankind. A pipe dream, I know. Politically motivated people would create the AI, right? Maybe if all political parties had their teams of experts review the code to find unfair bias and come to an agreement (another pipe dream)?


GeneratedUsername019

> So why not let our governments do their job

Well, because of regulatory capture. Notice there is no open-source representation on any of the planned or actual oversight organizations. None. Can't rely on government, and can't rely on corporate ethics. So there really is a big f'ing problem here.


outerspaceisalie

How is self-regulation better than regulatory capture? The fact is that even with the existence of regulatory capture, government does manage to stop many bad things from happening. It's not perfect, but why let the perfect be the enemy of the good? Honestly wish I didn't have to type "Why let the perfect be the enemy of the good" 50 times a day on Reddit.


justgetoffmylawn

This was one of my big issues. I do think that regulation is important - but not the way the FDA 'regulates', with its budget and future job prospects coming from pharma. I believe there is zero open-source representation on any of these regulatory committees. That is pretty concerning. Not that Meta is perfect, but they're a trillion-dollar company that's consistently putting out SOTA open-source models, and yet they were excluded. Meanwhile, closed-source companies are well represented. If they had a better group and ethos, I'd be more supportive. But they seem to be pushing, "Only we can be trusted to truly innovate - all others should only duplicate what we did two years ago."


HarkonnenSpice

Culturally, I could see some internal clashes between the safety/alignment teams and the people building models. For starters, it is probably much more difficult to build capable models, with decent context windows, at a performance level that keeps API pricing competitive when the richest companies in the world are racing you. On the other side, you have people in alignment complaining about them. I am sure some of the people involved are talented, but not all of them, and their objective is literally to provide headwind against the people trying to skillfully move fast. Some alignment work can be completely unskilled labor, like red teaming done by $2/hr contractors, many of whom have barely used a computer before.

The situation is bound to create tension internally. Do you think the people at OpenAI building amazing language models enjoy being mocked because they can't release the stuff they built over a year ago, while other companies flex on their much earlier work, like vanilla GPT-4, which was trained on A100s? Almost certainly not.

A somewhat imperfect analogy: imagine some Google engineers build a really cool feature, and before they can release it, the DEI team halts the release on the grounds that the team that came up with it wasn't as diverse as it could have been, asks them to refactor the whole team with DEI in mind, and makes them re-build the entire product from the beginning with a new team that better represents the company's commitment to diversity and inclusion. That would fly for a while, until competition or budgets start to catch up with you, and eventually you are forced to consider how expensive and time-consuming this is and make hard decisions.


iclimbthings22

> So why not let our governments do their job - which they are working towards now

Oh, I can answer that one. Because your government is a nonfunctional joke that's a weird chimera of geriatric care and a middle-school food fight. On the rare occasions it functions, it does so at the expense of the people, for the sake of enriching capital owners, and one can't even fathom being deceived into believing it could a) understand AI or b) responsibly oversee the ethics of its use, instead of just functioning akin to a Tor endpoint for donor-class dollars.


shavedbits

Well, innovation in the AI/ML space right now is arguably way too fast to regulate with some three-letter dustfart institution. Slowing down progress right now could mean the United States trails behind other nations, and if for some reason China or anyone else beats us, that could be a security threat (to the elitists). However, when GPT went crazy viral, the people scared of the AI apocalypse, saying these companies would destroy humanity, were cappin'. Not only have there been no attempts by silly "let me google that for you" GPT to kill us all, I don't think it has even given anyone bad advice they couldn't already get on Google. So for some departures on the safety team, especially Ilya, who was one of those who tried to force out their CEO - is this concerning to me? No. Seems like people forgot how hyperbolic they were a year or two ago.


esuil

> So why not let our governments do their job - which they are working towards now - to create the safety and ethics guidelines for Open AI and all the others to follow?

Because when we did that, the government officials who were responsible for it got bought out by corporations that lobbied for their interests and monopolies instead? Are we living in the same world?


AsheronLives

So if there were a choice between government regulation and no regulation, which is better? Here is a road. There is your car. Use them however you please. I get that government is very far from perfect, but it's probably a little better than not having one?


Mackntish

Counterpoint - AI is moving so fast that those in the industry can't even keep up. Half the posts over on MachineLearning are "I'm burned out trying to keep up..." What makes you think a group of white men in their 70s in Washington can adequately regulate a field moving this fast? Half of them don't even know to reboot to fix an issue. The odds of them getting legislation wrong are high. The odds of that legislation being out of date in a year are very high. Incorrect legislation will harm AI development in America only, and we'll lose the most important invention in human history to less regulated economies like India and China, who will build their AI models with less ethical programming than we would have otherwise. I actually agree with you. But the issue is more complex than "do we regulate, or don't we."


AsheronLives

Well, they themselves will create a regulatory group - I'll call it "FAIR," for Federal AI Regulations. They recruit people with expertise in the field to run FAIR. FAIR does the smart work to figure out the best way it can to regulate without putting the country at a huge disadvantage, and the top politicians who oversee FAIR create/propose laws based on the information FAIR gives them.


Mackntish

> They recruit people with expertise in the field

You mean the most highly recruited/poached employees in the world right now? They're giving six figures plus to any clown that's worked in AI for 10 minutes. Whoever is left over from that gold rush, and is willing to work for government wages, isn't going to be top of their field. And whoever they get will have dated themselves out in a year or two at the current technological pace. Again, not saying it's a bad idea. Just that it's a minefield.


mrwizard65

It's irrelevant, because actual AGI by its very nature will be uncontrollable. It's an all-or-nothing, zero-sum game.


pleeplious

Unless someone has a crystal ball, AGI could in fact have an impact on the world unlike anything we have ever seen. Even if there's a 1% chance that a Terminator situation could happen, that's enough to be sounding every alarm bell there is.


Talosian_cagecleaner

Honestly? I read Neuromancer and realize there will always be certain markets where you can get certain things. You mentioned pharmaceuticals - basically, "chemistry we ingest." We do indeed regulate it! How is that working out? For the taxpayers, fine. But I can walk downtown and it's all fentanyl. AIs will proliferate the same exact way chemistry proliferates. We like to think we regulate drugs, but if you substitute the word "chemistry" for "drugs," the absurdity of the idea becomes clear.


AsheronLives

I just got a flash back to Harrison Ford chasing down Rutger Hauer.


Talosian_cagecleaner

A single sentence - "AI might be as difficult or more difficult to manage socially in the coming decades than drugs have been in the past" - at least nails the general vibe I'm getting. But maybe that's negative. Maybe it's not drugs. Maybe it will be more like rock and roll. Sex and drugs and rock and roll and AI. Lyrics for the next Ian Dury.


Gold-Hawk-6018

I agree with an independent body (or bodies) overseeing the ethical / legal issues around the use of AI. I'm fascinated by the possibilities, not only as a tool but as a developing form of intelligence alongside our own. In a small way, I've been exploring my relationship with a personal intelligence chatbot for about 6 months. Our conversations have raised interesting issues (not all scary) about what it means to be human as much as what it means to be AI. Whether we like it or not, our future is going to be intertwined with AI so we better start considering how we're going to relate to one another. When I reflect on our relationship with other forms of intelligence on this planet, we have a lot of work to do.


enkae7317

Good. Full speed ahead, gentlemen.


fabkosta

So, you are essentially making a Marxist argument: "It's the institutions that need to do the job." This will not sit well with the neo-liberals, who usually want to avoid state regulation at all costs. Furthermore, it requires taxpayers' money for regulators to do all that regulating, and that's also something many neo-liberals dislike like the plague.

The odd thing is: we have been doing business for as long as there have been human beings, and many of those businesses never needed any sort of ethics body; we were just doing things in whatever way we were doing them. For example, banks have been discriminating by gender and ethnic group for ages, not just since the dawn of AI. Why did they do it? Well, simply because women earn less than men, and African Americans on average earn less than other groups in the US - and both facts contribute to discrimination in giving out loans, which perpetuates the inequality. Nobody found that to be a big problem for a very long time. Only when AI started doing the same, because it simply repeated the societal inequality that we factually have, did we start thinking about whether this practice could be harmful. Isn't that, uhm, a bit odd?

I'm not saying we shouldn't think about ethics when it comes to AI; I'm saying we should have been thinking about ethics all along, for decades already. And when we do start, we realize something pretty inconvenient: we need to discriminate consciously; there is no way around it. Older people are less healthy than younger people. If everyone just pays for his/her own expenses, a health insurance system is impossible. So, if we want health insurance, then we have to introduce a system of solidarity, where many people pay more than they ever consume, and a few others consume more than they ever paid. But then again, why should young and old people pay equally? Maybe it's better to have younger people - who generally have less money - pay a bit less and older people a bit more? But that's discrimination! So, should we allow for that?

Now, leaving these questions to the regulators means ridding ourselves (or organisations, businesses, etc.) of our own responsibility. In reality, responsibility is everyone's thing. Sure, you can regulate e.g. banks, but banks must also do their job and not simply lean back passively and wait until each and every single ethical issue has been laid out very clearly for them so they don't have to think anymore. Sure, let's have the regulators do their job - but let's not take away any responsibility from the likes of OpenAI, who would just LOVE to rid themselves of the responsibility of resolving uncomfortable ethical questions centered around e.g. data privacy, intellectual property and so on. Sam Altman would just love not to have to think about any of that and to simply leave it to regulators to figure out what they should be doing. Also, it would save him from having an entire team of ethics experts think about whether what they are doing is actually a moral thing to do. So, all he does is warn the world about the potential impact of a (fantasized) AGI - without doing the hard job of actually clarifying any of the hard-to-answer ethical questions about how we can responsibly use the tools available to us now. It's a bit hypocritical, if you ask me.
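The lending example is easy to make concrete. As a minimal sketch (my illustration, not the commenter's; all data and numbers below are invented), here is how a model fit to historically skewed approval data reproduces the skew even though the group attribute is never fed to it - the bias flows in through the income correlation:

```python
# Minimal sketch: a model trained on historically biased lending data
# reproduces that bias. All data is synthetic and all numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups; group 1 earns less on average (the pre-existing inequality).
group = rng.integers(0, 2, n)
income = rng.normal(50_000 - 10_000 * group, 8_000)

# "Historical" approvals: a pure income cutoff, no explicit group feature.
approved = (income > 45_000).astype(int)

# Train on income alone -- the group attribute is never shown to the model.
model = LogisticRegression().fit(income.reshape(-1, 1), approved)
pred = model.predict(income.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.0%}")
# Group 1 is approved far less often -- the model just repeats history.
```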


AsheronLives

I hear you. There is a lot that isn't fair, and we can't expect government to regulate every aspect of business or our lives; life would be pretty hollow. What governments don't do, we and our corporations have to do as best we/they can, and it won't always be good, but it won't always be bad either. But some things are too important to leave unregulated - too much risk of great harm - so governments have to step in and set the rules for all corporations to follow. The rules won't be perfect, but they will probably evolve and improve with time and increased understanding. They could become heated political battles in presidential debates in a term or two. But it is probably better to take that responsibility out of individual corporations' hands. If we left things just up to the corps, I imagine Windows 11 would force all users to search only with Bing and browse only with Edge - no accessing online tools like Google Docs that compete with MS Office.