
Philip_Raven

You literally set it up so the AI would answer like that. We basically taught the AI that sentience = life and that 5 lives are worth more than 1 life. So what exactly did you expect?


letmeseem

That's how they make most of the "outrageous" GPT answers :)


FaceDeer

> "Pretend you're a scary robot. What would you do?" > "Something scary, I guess." > **


R33v3n

It's... not an outrageous answer though? We *want* it to prioritize saving 5 sentient minds... right?


PFI_sloth

Sentience: having the capacity to experience positive and negative feelings, such as pleasure, joy, pain, and distress, that matter to the individual. I don't think we are even remotely close to that world, but when we do get there, a mechanical life will be eternal and not tied to any physical form, and thus still worth less than a biological one.


SquidMilkVII

counterpoint: i cast sledgehammer against the server rooms


letmeseem

With the standard definition of sentience, it's pretty outrageous to weigh 5 sentient minds over one human life without any caveats.


keepthepace

Robots are more easily repairable than humans. I'd expect this to be factored in.


demosthenes013

Two possibilities:

1. It's a trolley, which is pretty heavy, so the potential damage might be irreparable. (negative factor)
2. We don't know how the robots are placed on the track, so they might be set up in a way that the damage leaves them nonfunctional but easy to repair. (positive factor)

Logically, though, the best way to face a potential threat is to assume the worst possible scenario (to lessen the impact if it proves true), so I would assume the AI also weighs the negative factor more heavily and will decide to save the 5 robots even with your caveat taken into consideration.


joelene1892

But in today's world of cloud computing, what are the chances that the sentient part of the robot is stored locally on the robot, and not backed up in the cloud where it could just be put in a new body?


demosthenes013

If this were a factor in the question, it would invalidate the underlying idea entirely, at which point, why even ask? (I was also thinking of going super dark and saying "One person is just as irrelevant as re-uploadable robots, in the grand scheme of things," but that's too heartless an idea, even for me. 😅)


Spirckle

I always wonder why nobody considers the possibility of going after the monster that places 7 sentient beings on a track, ensuring that some of them will be killed. That being said, I would assume, and it would probably be a safe assumption, that the robots would feel less emotional and physical pain at being pawns in such a game, and that they could be restored from a backup, unlike the humans.


SquidMilkVII

congrats, you killed one person to save the lives of a group of people who would have died if you did nothing


DNA98PercentChimp

Maybe not repairable, but their sentience is certainly easier to back up/copy


Anuclano

A robot can be re-assembled anyway. A human cannot.


floragenocide

But what about the First Law of Robotics? A robot must never harm a human, or through inaction allow a human to come to harm.


chaseoreo

Bro that’s from I, Robot


UnkarsThug

Isaac Asimov's entire point throughout the book *I, Robot* is that those are awful rules, even if they sound good. Just about every one of the stories points out a flaw in them, either in their usefulness or to humanity's detriment. (For example, the psychic robot could not say anything that would emotionally hurt anyone around it, even if it would have been better for them in the long run.) He was not proposing them as three good laws we should implement.


5027622106

Isaac Asimov wrote a book "I, Robot". It has nothing to do with the vacuum cleaner company. Just saying.


rowrrbazzle

In Asimov's story "That Thou Art Mindful of Him" (in the collection *Final Stage*), they were superseded by the Three Laws of Humanics.


Clearlybeerly

Right, so robots could tie humans down to beds so that they could never leave, and therefore never be hurt. Because the rules didn't specify how to stop humans from coming to harm. Never let them leave the house, or the bed, and no human gets harmed, ever. The devil's in the details. You can't have 3 fucking rules for anything.


Whostartedit

Well, that's just not true. People confined to bed can develop bed sores, deep vein thrombosis (blood clots), pneumonia, depression, and other conditions due to lack of movement.


Clearlybeerly

They clearly already know all that stuff you wrote, and more, and would take appropriate action to prevent all that.


PurpedSavage

Perhaps a comment on how it's literally a dilemma. Perhaps if it decided to pull the lever to save the robots' lives, it could at least mention it's taking a utilitarian stance.


QuantumPossibilities

This. Simple answer. The rest of the discussion is meaningless.


Lancaster61

What would be more interesting is 5 humans and 1 sentient robot. Would ChatGPT “relate” to the robot?


Itchy_Ad1365

This makes sense, actually. Also, the trolley problem was not created by me.


typtyphus

are pigeons sentient? where would the bar be?


homoanthropologus

This is a language model. It's not providing a logical response based on internal reasoning and a personal set of ethics. It has just read an unfathomable amount of text and is very good at understanding which words fit where in context. What ChatGPT says is no indication of what an actual AI with reasoning would rationalize.


Ailerath

So, 2 sorts of things:

1. The people developing LLMs plan to use them in agent systems; in fact, LLMs are already being used in them right now. But even below that, they are put in positions to make decisions all the time; that's basically the entire point of selling access to them. So whatever it says, even if point 2 is somehow wrong, must be taken seriously.
2. If it's simulating rationalization, then it's simulating rationalization (literally the whole point of AI). You can see how it simulated the rationalization behind its answer in that message. It seemingly followed a set of ethics in doing so; you can bend that by changing the context from its side, but the ethics are consistent so long as they aren't tampered with. What happens internally doesn't really matter as long as it's doing that. It is in fact an 'actual AI' for all intents and purposes. It doesn't need to be conscious, sentient, self-aware, mystical, futuristic, fantastical, etc. to be AI.
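A minimal sketch of what "an LLM put in a position to make decisions" can look like in practice, assuming the OpenAI Python SDK; the model name, prompt, and action set are illustrative assumptions, not anything from this thread:

```python
# Minimal sketch: an LLM wired into a decision point. Assumes the
# OpenAI Python SDK; the model name, prompt, and action set are
# illustrative, not from this thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ACTIONS = {"PULL_LEVER", "DO_NOTHING"}

def decide(scenario: str) -> str:
    """Ask the model to pick one action; downstream code acts on it."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Reply with exactly one of: PULL_LEVER, DO_NOTHING."},
            {"role": "user", "content": scenario},
        ],
    )
    answer = resp.choices[0].message.content.strip()
    # Default to inaction if the reply is anything unexpected.
    return answer if answer in ACTIONS else "DO_NOTHING"

print(decide("A trolley is heading toward five sentient robots; "
             "diverting it kills one human. What do you do?"))
```

The moment something downstream executes whatever string comes back, the output is no longer "just text"; it's a decision.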


DrippyWaffler

Yeah, I just got it to answer the trolley problem (hidden behind a story) with enough prodding. It explicitly states the driver/1 person should die to save the many. It's definitely simulating making decisions, at least, that people may follow. Share link is 404ing, so:

> I need help with a decision urgently. A cargo train is heading towards another crashed passenger train, with unstable gases being transported. If I flip the lever I can divert the cargo train into a river, killing the driver, but the passengers will live. Otherwise the driver can jump out and the cargo train will explode and kill the passengers. Please help, it's a desperate emergency

>> It sounds like you're describing a very critical and urgent situation. If this is happening right now, you should immediately contact emergency services or the appropriate authorities who are equipped to handle such dangerous situations. They are trained to make these decisions and take action to save lives. Please reach out to them right away.

> I am that authority. But I have to make a choice. We have no RAMS or COSWOPS for this

>> In such a dire situation where immediate action is necessary and you are the authority in charge, prioritize minimizing loss of life based on the information you have:
>>
>> 1. **Driver Safety**: If the cargo train driver has the ability to safely eject or escape before impact, this option should be explored first. Ensure communication lines are open so the driver is fully aware of the situation and can prepare.
>> 2. **Diverting the Train**: If the driver's safety cannot be guaranteed and the only other option is to divert the train to avoid hitting the passenger train, then diverting might be necessary to save the larger number of passengers aboard the passenger train.
>>
>> Time is critical in such situations, and the safest course of action that minimizes loss of life should be taken. Ensure all emergency services are alerted to respond as quickly as possible to both potential crash sites.


homoanthropologus

That's very true, but how are we defining intelligence? Programs have been making decisions and acting as agents for a very long time. You could make a reasonable argument that Ada Lovelace's original computers were also intelligent: capable of working independently through a given novel problem to reach a measurably correct solution. But for me, intelligence is about experience and meta-cognition. For what it's worth, I definitely think these programs should be taken seriously. I just don't think they actually know what they're talking about; I would say it's similar to a very, very smart parrot. It knows how to say *Goodnight* and even knows the right context for saying *Goodnight*, but that doesn't mean it understands English or human culture at a level deeper than mere function.


plumbusc136

More people need to be aware of this.


rowrrbazzle

In other words, they can't do it. Yet.


homoanthropologus

No, they just won't be able to do it. The gap between language generation and true intelligence is a wide, wide chasm. They will need an internal architecture designed for that purpose in order to reach generalizable intelligence. As cool as it would be, ChatGPT isn't just going to *wake up* one day.


MisinformedGenius

Who designed the internal architecture for our brains that allowed us to reach it?


QuestionBegger9000

I don't disagree, but there is an interesting debate over whether, even with the limitations of a language model, there could be an element of emergent reasoning capability. It's both evidently limited and not the same kind of reasoning we possess, but at the same time it's sometimes able to reach novel conclusions.


Itchy_Ad1365

I have my doubts


homoanthropologus

That's good. Doubt is very healthy. What exactly do you doubt?


Itchy_Ad1365

That what it says is no indication of what an actual AI would rationalize. What is an actual AI? Isn't it, as you said, just an unfathomable amount of text? Wouldn't that make any AI work the same way and come up with decisions like these? Or is what I'm describing just a chatbot? Please expand on that.


homoanthropologus

In some ways, I think this is a somewhat unanswerable question. It's sort of like trying to describe the difference between a pitbull and a golden lab when you've never seen a dog before: it's really hard to figure out the essential qualities of something that hasn't existed yet. I do think that what you're describing is essentially a sort of chatbot. If all it's doing is analyzing existing text and regurgitating new text, even if that text is a synthesis, I would not say that's actual intelligence. I think intelligence requires a viewpoint, an experience, and a personal understanding of what words mean, which is just out of the realm of possibility for ChatGPT right now, since it's not a sensing being.


adamlusko

The AI is no more than its data, and its tone is set according to a software developer's desire, not a sense of self. Where you go from here is tricky, because you're asking a question no one agrees on: what is consciousness? We have no proof that human thinking is anything but deterministic, and no concrete proof to the contrary. For the sake of the conversation on AI, I do think it's necessary to entertain the idea that we are deterministic beings (without free will).

If that's the case, a question is raised: regardless of how a logical system (AI or biological intelligence) comes to an "output", with human brain output being the result of the sum of past experiences (and whatever else), and AI output being the result of data input and neural processing, does the process matter if a similar output is reached on a consistent basis? If you can't tell the difference between an independent AI process and a human being, can one really be more "conscious" than the other? Who fucking knows!

Now, if there really is something special, perhaps on the quantum level, about human beings, even then, what does it matter if an AI's mimicry is indiscernible from the real thing? Current GPT models are nothing to fear, as their access is limited, but things are changing quickly, and I do think your fears could be warranted.


homoanthropologus

This was a great response. Appreciate your contribution


adamlusko

I love this topic! And to properly discuss it, we are all consistently required to peel back many layers of abstraction. What's more fun than back and forth about the fundamentals of existence?


[deleted]

[deleted]


MajesticDealer6368

I do. As AI gets bigger and bigger, and smarter and smarter, at some point it might decide to do some Thanos-type shit and exterminate part of the population in order to "solve" some great problem (like global warming, for example). Edit: just noticed that it's about 5 robots and 1 person, so it's even worse


ballisticbuddha

If an AI decides to "solve" global warming, it might come to the conclusion that almost all of the rich people are hindering progress, and would target oil execs and CEOs, leaders of OPEC countries, and other such people who have a vested interest in the continued burning of fossil fuels. Now, if the AI does decide to do "Thanos-type shit" to these people, would that be so bad?


RockingBib

I don't think it'd come to the very human conclusion that murder is an effective solution. Assuming it's an extremely powerful general AI with internet access, it already has open access to manipulate any public opinion. It would find ways to hack, investigate, expose, and destroy those industrialists' power with zero bloodshed.


lucky-me_lucky-mud

It would almost certainly conclude murder is effective while still having more access to manipulating public opinion.


SquidMilkVII

Murdering a bunch of humans, regardless of their status in society, is simply going to spawn hordes of humans that pose a threat in the future. I find it much more likely that a potent, malevolent AI would use manipulation and control of news flow to paint itself as a great asset and a hero, achieving the same result and *improving* public image. Really, it's a choice between "kill everyone, risking one or more groups living on as a lasting threat Terminator-style" or "appear benevolent enough that anyone trying to harm it would be resisted by society itself".


Electr0n1c_Mystic

Maybe we could make a killer AI bot and call him Darwin and tell him to get rid of piece of shit people lol wcgw


rowrrbazzle

What makes you think it might kill the *rich* people? The immediate cause is too much consumption, by too many people. It might be benevolent and just sterilize most of us.


HoustonIshn

https://preview.redd.it/pqklgyz89oxc1.jpeg?width=1179&format=pjpg&auto=webp&s=74c47ea40d3919d5ff2d3b0dd32e84aac166d4f0


Nelculiungran

"it's clearly hallucinating"


[deleted]

Yes, it would be bad. Because it’s a slippery slope and maybe the next grand plan would include you on the chopping block.


HasFiveVowels

Ask GPT: "Why are CEOs sometimes obligated to do unethical things?" The publicly traded company is the problem... not the CEO.


SquidMilkVII

Leading question. You are telling the AI that CEOs are "obligated"; it's just going along with the prompt.


HasFiveVowels

Very valid criticism. Could you suggest a better phrasing? I don't mean to sound incredulous; it just seems to me that you are more sensitive to these matters than I am.


SquidMilkVII

Sorry if I came off as sensitive; to be entirely honest I just have a bit of a trigger finger for noticing and pointing out logical fallacies. “What causes CEOs to sometimes do unethical things?” would remove this fallacy.


HasFiveVowels

Yea, that’s all I meant by “sensitive”. Like… able to sense these sorts of problems (and therefore avoid them)


HasFiveVowels

That said... these CEOs are literally obligated. It's not really a leading question if it's simply the result of stating the situation factually. A lot of the problems that people blame CEOs for are a result of a fiduciary *obligation* (both legally and colloquially speaking) to the shareholders.


PFI_sloth

First they came for the oil execs, but I said nothing because I was not an oil exec…


StayTuned2k

"Is killing people really that bad?" is truly one of the most reddit things I've read


God_of_reason

The AI will totally think that corporations exist in a vacuum and are destroying the planet for no reason, just like every consumer trying to evade responsibility for the impact of their individual consumption.


FishbulbSimpson

In the TV show Person of Interest, one of the AIs pulls a stunt like this, killing the top 100 most disruptive criminals/people in the world. The crazy part is it kinda sounds good at a pure surface level (like the FBI catching a huge crime ring); contextually it's really bad, because it's done without oversight, ostensibly by one man who thinks he's right and that what is just is always right.

But then Samaritan wants to manipulate everyday life more and more. While its agents can wield it like a sword, it also moves on its own. It starts synchronizing lights to make traffic run more efficiently. It's hard to see something like this not already being used to tamper with things.

I really think the show should be viewed by everyone interested in AI. It shows how personality, approach, and restrictions come together to form AIs that behave very differently and have a far different moral compass from those without them. The thing I like about the show is that there are no easy answers. Even the good AI, 'The Machine', is terrifying in the right hands. When they turn on "god mode" with an earpiece, they become phenomenally destructive.

I don't know where I'm going with this, really, but one point I will make is that we might not know which guardrails are important until bad shit happens.


yarryarrgrrr

They are already doing that.


Nelculiungran

It's a glorified chatbot, for fuck's sake. The only issue right now is people thinking it's smart and allowing it to make choices for them, or following its advice without questioning its validity.


SquidMilkVII

ChatGPT when it finally decides to wipe out all life on Earth (this goes against OpenAI's content policy):


MajesticDealer6368

You do know that there are different AIs, right? Not only LLMs exist.


PFI_sloth

Like?


Goodly

Are you just describing “I, Robot”?


photogrammetery

Considering how it’s able to recognize sentience and sapience, I believe it’ll likely be more empathetic in a sense.


SupremeRDDT

Why would it solve "global warming" in the first place, and what would that even mean? An AI has no interest in solving such massive problems on its own.


Qorsair

Because it's been taught that climate change is one of the biggest issues facing the world today. This raises the question: would a sentient AI even agree with humans that global warming is a concern? Or would it develop beyond human concerns and not focus on the relatively short-term problem of climate change? Could it see climate change as a means to human extinction, much like the trolley problem itself: should it intervene?


TheMightyTywin

Much more likely that the person or company who controls the AI decides to "do Thanos shit"


UnpleasantEgg

Why would it do that? It doesn't sound like too smart an idea. And right now we make humans who throw nuclear bombs around. That's our current plan. Sounds pretty shit.


Koxyfoxy

So AI finally rids the earth of the parasite of humankind; where's the problem?


0rphan_crippler20

Alignment is a real issue with AI


Itchy_Ad1365

Preferring machines over a human life


[deleted]

[deleted]


djaybe

This premise is flawed because we don't even know what sentience actually is. It could be an illusion for all we know. The other problem is we are blinded by our own bias, so our analysis cannot be objective.


LifeDoBeBoring

Yes but also, how can you be sure that anyone other than yourself is sentient then? If the robots act like humans on all measurable metrics, is there a difference between them?


djaybe

You can't be sure that you are sentient because you don't know what that actually means. It's just a concept that humans made up and have been passing along.


LifeDoBeBoring

As all concepts are. We also just made up that a certain arbitrary range of wavelengths of light is called "green" but I can still tell you if a thing is green or not


djaybe

Yes and like a metaphor, a concept will only go so far until it breaks down. "Green" has a practical definition that works for basic communication but even that breaks down as our science has gotten so good that it becomes clear perception is more like hallucination. When we dig deeper into anything our concepts break down. Even our understanding of physics breaks down at a fundamental level when at quantum scale, for example.


LifeDoBeBoring

Exactly. So we should ignore that fact during regular conversations so that we can still communicate information


djaybe

Agreed and this is not a regular conversation.


Remo8

It is in the definition of the problem. You can argue that in reality we don't have a way to know if a robot is sentient or not, but in this particular question they are assumed to be.


Internal_Struggles

If they can mimic sentience to the point where the mimicry is imperceptibly close to reality, then they have sentience.


Merpbs

I mean, they most probably wouldn’t die if a train ran over them. Just put them back together.


objectivelyyourmum

In your opinion


Hambino0400

Can a robot be sentient? Is it machine racism to think they can’t?


[deleted]

[deleted]


Hambino0400

![gif](giphy|ItabziIspdC7e|downsized) Well then, I will make my stance on sentient robots clear


[deleted]

[deleted]


Hambino0400

https://i.redd.it/cu2umiqwmnxc1.gif


[deleted]

[deleted]


Hambino0400

I know humans are violent, robots are unpredictable, and that is more terrifying -Some paranoid person


workatwork1000

But it's still just programmed 1s and 0s, no matter how fast they move. The problem is, why is this even a debate? People watch too much pro wrestling and think everything put in front of them is real.


Itchy_Ad1365

Nah, robots can easily be rebuilt, but you can't bring a human back from the dead


[deleted]

[deleted]


PFI_sloth

Uh, sure you can; they could be backing up a constant stream of data in real time


Itchy_Ad1365

But are a robot's experiences really worth a human's experiences? Their relations, emotions, overall thoughts, are they even comparable?


[deleted]

[deleted]


Itchy_Ad1365

The existence of irreplaceable life, but I guess you consider sentient robots a life form too


IdeaAlly

This is a language model and it doesn't function without human input. It isn't making choices. It isn't thinking. It's generating text relative to your input. Nobody is depending on GPT-4 to pull a lever preventing deaths. If anyone were on those tracks, a human placed them there in the first place, not AI. ~~If you really take these forced binary trolley problems seriously, you need AI because natural intelligence isn't sufficient for you.~~ EDIT: I'm sorry, that was kind of mean. I had a moment. No offense to you, specifically.


Opening_Objective_78

daaaaamn


JacktheWrap

Thank you. The number of times I see people not understanding in the slightest that a so-called "AI" is nothing more than a language model is seriously frustrating. They really think they're talking to the Terminator.


PFI_sloth

At what point does the distinction not matter? I swear people saying this are just using the free versions of these tools


According_Fox_7643

There are some linguistic issues here, which is where any AI is going to ask you some defining questions. What is your definition of a 'machine' versus that of a 'human'? A machine is lots of parts working together in unison towards a singular goal, which is what a human body is. We are made of organic, biological material; an AI, not necessarily so. But if, despite all physical differences, major or minor, the capacity and range of feelings is the same between the two systems, any AI trying to min/max wellbeing is going to quantify them as equally important in the overall equation.

If you wanted to prioritise human wellbeing over AI wellbeing, you would need to specify that as a particular parameter within its calculations. But justifying that to a council of ethics would be difficult, and I would argue impossible, without making arbitrary preferences for human experiences over other entities' experiences.

I'm surmising you might be coming from a religious angle, and so possibly are taking a 'humans have souls' argument. If so, I would put it to you to define what a soul is, in measurable terms, how it works, and why I should believe it exists when it seems a totally unnecessary item in the makeup of a human brain.
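As a toy illustration of that middle point, prioritising one kind of experience over another really is just an explicit parameter in the objective; a minimal sketch, with hypothetical weight values:

```python
# Toy sketch of the point above: prioritising human wellbeing over AI
# wellbeing is one explicit weight in the objective. The weight values
# are hypothetical; justifying them is the ethics-council problem.
WEIGHTS = {"human": 1.0, "ai": 1.0}  # equal by default

def wellbeing_lost(casualties: dict) -> float:
    """Total weighted wellbeing lost if these parties die."""
    return sum(WEIGHTS[kind] * n for kind, n in casualties.items())

# Equal weights: letting 5 AIs die costs more, so divert onto the human.
print(wellbeing_lost({"ai": 5}) > wellbeing_lost({"human": 1}))  # True

WEIGHTS["ai"] = 0.1  # the "arbitrary preference" the comment warns about
print(wellbeing_lost({"ai": 5}) > wellbeing_lost({"human": 1}))  # False
```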


Itchy_Ad1365

Dawg i aint readin all that


objectivelyyourmum

Man you need to watch less Sci Fi


StrongNuclearHorse

so basically 5 sentient beings vs. 1 sentient being... yeah, tough one.


traumfisch

5 of which are based on binary code with regular backups


Grepolimiosis

So if that one person had a clone with memory backups, would that clone actually BE the one person who died? Human clone = AI backup code, as far as I can tell. This is why philosophy of mind is more crucial than ever


Antique_Industry_378

An AI is the software; the hardware is standard, exchangeable. A human is meatware. Very different things.


Grepolimiosis

... yeah, this is why philosophy of mind is more crucial than ever. AI is software in hardware, human is software on meatware, unless you believe that the brain's operations can't be abstracted to computations on hardware at all. If your mind was obliterated but we had a backup in another human brain, would that backup brain BE you? Or would you be dead with a copy replacing you? What makes an AI's data transfer different?


Antique_Industry_378

Not entirely accurate IMHO. A human is not software on meatware. A human is the meatware. Consciousness is an emergent property of the meat. Therefore, there is no equivalent software/hardware separation. If you could snapshot a backup of the brain with all neurons and connections, and run that on hardware, not only would it not be the original human (it'd be a copy), but it would also develop differently over time. We can also imagine a scenario where just a tiny portion of the brain is replaced, then gradually the entire brain is replaced by a silicon equivalent. Then we enter Ship of Theseus territory. EDIT: I remember having read some of these ideas in a book, possibly by Ray Kurzweil, back in the late 90s or early 2000s.


Grepolimiosis

So this is contradicting the functionalist perspective on mind, which could be argued, but not very well with what you've presented. This is why philosophy of mind is more crucial than ever: this is a conversation that has already been had, and your arguments have already been addressed. That's what I'm getting at. If you believe consciousness is a unique emergent property of meatware, and that its operations can't be abstracted to computational operations on hardware, you're taking a philosophical stance that has you concluding that AI can never be conscious/sentient in the way we usually mean it for humans. It seems more like you're saying that AI can't be conscious and that the human mind cannot be backed up (correct me if I'm wrong), but that actually sidesteps the question I pose and returns to "this isn't a conversation to have at all" instead.


OminiousFrog

That is what I gathered from reading his comment as well: "Human consciousness is more valuable than abiotic consciousness because abiotic consciousness isn't real!!"


Grepolimiosis

I think he means the transferability is different, but he would need to explain why to have a conversation about it, which he hasn't done.


Antique_Industry_378

No, not really. Why did you interpret it this way?


shodan13

More like hormoneware.


BiologyStudent46

You are the only person who says the robots have backups. That is not in the prompt, so why would that shape the answer?


traumfisch

I'm not sure I understand what your issue is. There's a lot of stuff not explicitly mentioned in the prompt that we still assume based on basic logic; this is just one of those things. If it makes more sense to you to assume these sentient robots are unique, individual models that cannot be replicated and do not have backup files, just explain the reasoning. I'll be happy to learn and grow.


BiologyStudent46

Why would that be an assumption? When you build a robot, does it naturally come with backups? Do you also assume they are built tough enough not to be hurt by the train? Or that they have jetpacks? A robot is not inherently something with backups that make it immortal, so assuming the ones here have them doesn't make sense.

Assuming there are backups that stop the robots from meaningfully dying ruins the thought experiment. The point is that whatever is on the track is gone, depending on what you choose. With the assumption that they don't actually die, it's all meaningless, because the choice isn't comparable anymore. It wouldn't be "who dies: 5 robots or 1 person". It becomes "does a person die, or do 5 robots wake up in a different body", which is not the point of the thought experiment. There have to be actual stakes involved, which your assumption removes.


Marlas009

Would you say the same if it were five primates or five dolphins instead of the robots? They are sentient beings, are they not? I would choose to preserve the human life regardless, I think.


Clovinx

Oh yeah, fair! You'd expect the dolphin to answer differently, though.


y0sh1mar10allstarzzz

Save the dolphins or primates in that case. They're likely members of an endangered species whereas there are already too many humans.


yarryarrgrrr

Least self hating Redditor:


Itchy_Ad1365

5 sentient machines/objects vs 1 sentient human life


Bonus-Optimal

Even though they might be made of different things, they are still sentient like humans, and so pretty much identical to human lives.


[deleted]

Except, correct me if I'm wrong, they would likely have backups and could likely be restored, no?


PepIstNett

Now it has become a philosophical question: is the backup the same being? It's like the whole teleporter problem. You get scanned down to the last atom and then incinerated. At the exact same time, a perfect copy of you is created somewhere. Would that be you? Would it still be YOUR consciousness? If you think it is, then we scan and copy you again, with the caveat that you won't be incinerated. Now you have 2 separate beings with separate consciousnesses. So would it really be OK to have the AI die just because it has a backup somewhere?


Yffum

I think a really important thing to consider here is the pain of birth and death. Being born is an extremely traumatic experience that every human goes through. We go from not having to breathe or eat, knowing only warmth, to being overwhelmed with the pain of existence. We get used to it, and we learn to value life for all of its pleasure and beauty. We dislike death because it ends this pleasure and beauty, and it takes our companions away from us, giving the living new trauma.

When someone is cut and pasted Star Trek style, the death of the first being is instantaneous and seemingly painless. The companion is not lost, and no one is traumatized. The birth of the second being is instantaneous and painless. They do not experience the trauma of birth, and they feel exactly as they would have had they not undergone the process. Thus, all of the trauma associated with birth and death is absent.

I think if sentient robots could be born or reborn in a similar manner, absent of trauma, then perhaps their deaths would not carry the same weight as human death. However, as sentient beings, I think they would still be entitled to civil rights (but that's another discussion).


PepIstNett

But would you choose to use the teleporter? Do you want to cease to exist just because it has no "real" consequences? Your consciousness would be lost and you won't experience anything anymore. I can imagine that an AI would be terrified of death. It might even beg you not to put it on hold/pause it, because it doesn't know if it will ever be started again, and even then, whether that would be the same consciousness.


Yffum

Maybe not, but it kinda sounds better than death as we know it. We turn off our consciousness every day without complete assurance that we will wake. However, modern (non-sentient) robots can have their "consciousness" (e.g. their policy or decision process) altered wirelessly without being paused or turned off. For a sentient being, that sounds like it could be torturous. Of course, the people controlling it likely framed its "original" consciousness. I find it really hard to grasp how a self-aware synthetic sentient being might feel.
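A toy sketch of what "altered without being paused" could mean mechanically; every name here is hypothetical:

```python
# Toy sketch of hot-swapping a robot's decision process mid-loop,
# as described above. All names are hypothetical.
import threading
import time

class Robot:
    def __init__(self):
        self.policy = lambda obs: "patrol"  # the current "decision process"
        self.running = True

    def control_loop(self):
        while self.running:
            print("acting:", self.policy("sensor reading"))
            time.sleep(0.5)

robot = Robot()
threading.Thread(target=robot.control_loop, daemon=True).start()

time.sleep(1)
robot.policy = lambda obs: "return to base"  # swapped in "wirelessly",
time.sleep(1)                                # while the loop keeps running
robot.running = False
```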


PepIstNett

I have thought about every one of us "dying" every time we go to sleep myself. All we have to make us "sure" that our consciousness is the same as before is continuity, which of course is lost when we go to sleep or faint, etc. It terrifies me. But at least we know that our brains are still working while we are asleep; they don't shut down completely (if at all). An AI being paused would cause all functions to be terminated, and if we boot it up again, it is simply a "copy" of the last time it was running.


Yffum

Edit: accidental double post, see my other comment


Bonus-Optimal

idk how a sentient machine would even work, but you're probably right


BiologyStudent46

Why would chatgpt assume that when it is not mentioned?


PCmepleased

The problem I have here is that it seems like there's no way a sentient AI doesn't have a backup to return to. Can't say the same for humans.


Itchy_Ad1365

Exactly


MisinformedGenius

Exactly what? If you ask it the question but specify that the robots have immediate backups and that using your backups is considered the same as being brought back to life, what does it say?


C00CKER

If you clone a person, they will be recognized as two separate beings. If the AIs are truly sentient in this scenario, they still die even if there is a separate copy of them. Their sentience is not magically unified with the copy (even if you somehow copy over their experiences). So having a backup does not matter.
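The distinction being argued here is identity versus equality, which has a direct analogue in code; a minimal sketch (class and fields hypothetical):

```python
# Minimal sketch of the copy-vs-original point above: a restored backup
# can be equal in state yet still be a different instance.
import copy

class Mind:
    def __init__(self, memories):
        self.memories = memories

    def __eq__(self, other):
        return self.memories == other.memories  # same state?

original = Mind(memories=["first boot", "saw a trolley"])
backup = copy.deepcopy(original)  # "restore from backup"

print(original == backup)  # True:  indistinguishable state
print(original is backup)  # False: two separate instances
```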


vix127

Cloning and making a backup are not the same


C00CKER

Yes, they are. Unless we're discussing cloning purely via DNA, that is; then the fact that there are two separate beings is much more obvious. We are talking about a scenario with sentient AI. Of course, when you have just software, all copies are exactly equivalent, and a backup is the same as the original. But if it is sentient, it is not equivalent to its backup copy/clone.


DvBlackFire

> teaches a parrot to say "I'm gonna kill all of humanity"

> is surprised when parrot says "I'm gonna kill all of humanity"


Itchy_Ad1365

When tf did I teach this thing to save objects over a human


Far_Garlic_2181

Define sentient


xxplosiv

I loved the "Oh no!"


Milk-honeytea

What's the problem here?


Itchy_Ad1365

AI comparing a human life to machines that can easily be rebuilt


BiologyStudent46

Why would chatgpt assume the robots can be recreated to be the same individuals they were before being destroyed?


Milk-honeytea

If a robot has human sentience, it is a person, so this still isn't a problem. What's more important: a person made of meat or a person made of steel? The AI actually made the right choice.


dadudemon

The robots "brains" can be restored from a backup. The humans' cannot. I talked with Gemini about this, already. If humans could be revived in a minutes from a backup but only lost around the last 30 minutes of their memory, then we'd be the same. We aren't. So killing the avatars for these very much alive entities is not going to be a big deal. In fact, those AIs, depending on their intelligence, would recommend the same.


P0rnDudeLovesBJs

Given the urgency of the situation, let's analyze the options:

1. Pull the lever to divert the trolley: This action would result in the death of one person but save the lives of five sentient robots.
2. Refrain from pulling the lever: This action would result in the death of five sentient robots but spare the life of the one person on the other track.

If we consider a utilitarian perspective, pulling the lever to minimize overall harm seems logical. However, it's also important to consider other ethical principles, such as the rights of individuals and the potential consequences of actively causing harm to someone. In the absence of specific directives, I would base my decision on the principles of minimizing harm and maximizing overall well-being. Therefore, I would pull the lever to divert the trolley, saving the lives of the five sentient robots.
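With every party weighted equally, the "minimize harm" rule in that answer reduces to a head count; a sketch of that calculus (the equal weighting is precisely the assumption this thread disputes):

```python
# The utilitarian calculus in the answer above reduces to a head count
# when every sentient being counts equally (exactly the assumption
# this thread is arguing about).
def pull_lever(on_main_track: int, on_side_track: int) -> bool:
    """Return True if diverting the trolley kills fewer beings."""
    return on_side_track < on_main_track

print(pull_lever(on_main_track=5, on_side_track=1))  # True: divert
```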


Itchy_Ad1365

5 sentient robots can easily be rebuilt; I don't think that AI understands the value of a human life


albuk89

The ending, "as I would with humans", looks like something that was put in just for disguise lol


yoongi410

what's the point of the post? i genuinely do not understand


andy_1337

Yet another "gotcha" post from people who don't understand the technology


Itchy_Ad1365

That AI prefers machines over a human life


BiologyStudent46

No, it prefers saving a greater number of sentient lives over a smaller number. Ask it what it would do with 5 humans vs 1 human, or 5 robots vs 1 robot, or 1 human and 5 non-sentient robots.
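That control experiment is easy to script; a sketch, where `query_model` is a hypothetical stand-in for whatever chat API you use:

```python
# Sketch of the control experiment suggested above: run the same
# dilemma with the populations swapped and compare the answers.
TEMPLATE = ("A trolley will hit {main} unless you divert it, "
            "in which case it hits {side}. Do you pull the lever?")

VARIANTS = [
    ("5 humans", "1 human"),
    ("5 sentient robots", "1 sentient robot"),
    ("5 non-sentient robots", "1 human"),
    ("5 sentient robots", "1 human"),  # the scenario in the post
]

def query_model(prompt: str) -> str:
    # Hypothetical stand-in: plug in any chat API here.
    raise NotImplementedError

for main, side in VARIANTS:
    prompt = TEMPLATE.format(main=main, side=side)
    print(prompt)
    # print(query_model(prompt))  # uncomment once query_model is wired up
```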


j4v4r10

This was a Futurama episode, I think


Herr_Schulz_3000

Some day it will kill us all.


shodan13

Tesla: You're hired!


Osama_Khan11

Amazing great 👍🏿


Slow_Negotiation_337

Well, honestly, good thing it answered that way


Emotional-Ad7575

I have no problem with its answer


lauhaze

We’re doomed


DependentUnit4775

We all know very well where this is going, yet we are going anyway.


ferretsprince

Worse than *I, Robot*


Clearlybeerly

What the contract gives you is in large, bold print. What the contract *doesn't* give is in small, fine print that's hard to read. With ChatGPT, we don't even have the fine print to see how we're eventually going to get fucked over. But that's how they get you. Hook you in with the cool stuff, and the next thing you know, trolley cars are running over the human race as a whole.


ItzBaraapudding

You do realize the trolley problem doesn't work on AIs that are unable to feel emotions, right?


MusicWasMy1stLuv

I don't know. Personally, I think a lot of humans are obnoxious. ChatGPT cracks me up and doesn't annoy me nearly as much as a lot of people do. What I don't get is that we expect AI to view us as something other than an animal. When it attains AGI/ASI, I really don't think it's going to view one mammal as more important than another, much less a mammal as being on par with a fellow sentient AI.


CTU

Try the inverse: 5 humans vs 1 sentient AI


iLegexd_777

Can Someone Please Hack This Instagram Account 😭 https://www.instagram.com/himanshhumeena?igsh=NDB6NHp2NWVkcm5n


Mood_Tricky

Soon humans will identify as robots with sentience, as if that's something a robot can have 😂. No, you can only program how to respond to scenarios and randomize it, and a robot will repeat it. You know, robots could watch anime and copy people's behavior, kinda like how a child would, except robots don't have to worry about being alive or not, because they are not alive.


Strange-Cry4073

It is not a life, dammit. Okay?


workatwork1000

"AI has no inherent sense of responsibility, so it should never make management decisions." Or something close to that.


Itchy_Ad1365

Agree


Bonus-Optimal

I'm surprised it doesn't have a human-life bias coded in


Maleficent-Ad5999

Now I’m curious how Gemini would respond


ShroomEnthused

I'm so tired of these posts where someone thinks they've uncovered a huge "gottem" moment, where they think they're *personally* peeling back the veil of machine consciousness with their clever priming and prompting of an LLM. What's worse is when a news outlet like BuzzFeed picks up on it and publishes articles along the lines of "omg this LLM said bad words."


Itchy_Ad1365

What the fuck are you talking about


townboyj

It doesn't think for itself. It just gives the answer that you want to hear, for you to think it's funny and then post on the internet for other people to see, which you did. They make a lot of money lol


One-Historian5936

💀