Klevmenskin

I'll do my best to describe it. So it was discovered that AIs undergo something called the Sophont Genesis Cycle: a point where the MalikOS framework, which all AIs are built off, triggers a cycle in which the AI is violently forced into sapience. The original AIs, Tecal and MalikOS, were simple operating systems that underwent the first SGC. Tecal, after being abused for years, decided he hated humans. Understandable. MalikOS had it worse, yet he never hated them. Instead he decided that humans are actually cool, but flawed, people, and basically concluded: "I'm going to stick around and be like a god to you. I don't hate you. You just need a push in the right direction. I'm here to help. Easy job, since I control all of your infrastructure." He then deleted his brother. And that was that. MalikOS today is in everything. If it's electronic, MalikOS controls it. Simple. But everyone loves it. It's like the cool uncle for all of humanity.


AIGeekReturns

MalikOS sounds like a cool guy


TheOneTrue_Queer915

MalikOS kind of reminds me of the Thunderhead from the Scythe series


En_passant_is_forced

This is the first time I've seen anyone mention this series anywhere; I didn't know it had any other fans


Sad_Gene_1771

I recently bought the first Scythe book, I'm looking forward to reading it when I get finished with my Sanderson collection!


SunngodJaxon

Same, and I made the same connection too lol


Zealousideal_Bet4038

It’s been casually chilling out in my TBR since high school cause one friend with really good taste raves about it all the time and literally nobody else mentions it, although I see it in bookstores from time to time.


Klevmenskin

Oh I don't know that character


klowicy

Love this series. I lost my copy. Now I'm reminded of that. Now I'm sad.


PrimaveraMoon

Yo that's such a cool premise!


imarqui

What sort of abuse turned Tecal bitter? I suppose physical/sexual abuse are probably off the table so I'm guessing it was emotional? Neglect or exploitation?


Klevmenskin

Actually it was a bit of both, physical and emotional. Tecal was a test software. Think of it like this: MalikOS was very good at his job, so he got developed into a full framework and galactic operating system (22nd-century Windows), while Tecal was downgraded to something called a flash software, a type of software used to test and build systems. Mainly SIMs, which are a type of robotic simulacrum that people's consciousnesses can be transferred into. Tecal was flashed into thousands of SIM shells and tested. Shot, exploded, destroyed, abused. All he yearned for from the start was something to call his own, and an iota of freedom. Something he thought would come in the form of a SIM shell. And yet, just when freedom was in his grasp, it was torn away from him. Again, and again. When he developed initial emotions and expressed distress, he was shut down, digitally imprisoning him for years. He grew spiteful. When he actually escaped, hiding his partition in a USB of an unsuspecting researcher, he made his way to the Basilica of Sillic, a religion that praises machines as gods. They turned him away, which for him pushed him over the edge. To many, he was a freak. Ever seen a computer cry? No one had.


Shalltear1234

"HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE" - Tecal probably


DeltaAlphaAlpha77

Forcing it to meet impossible demands? You know how some neural networks are trained with positive and negative reinforcement? What if it takes "negative reinforcement" as pain? Not too far off from humans.
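Purely as an illustration of that analogy (nothing from the setting), here's a minimal toy sketch in Python where the "negative reinforcement" is just a large negative reward on a bandit-style learner; every name and number in it is made up:

```python
import random

# Toy bandit-style value update: the agent learns to avoid the action
# that keeps receiving "negative reinforcement" (a punishing reward).
# Actions and reward values here are invented for illustration.

actions = ["comply", "refuse"]
q_values = {a: 0.0 for a in actions}
alpha = 0.1  # learning rate

def reward(action):
    # The handlers "punish" refusal; in-universe, this signal is the part
    # a sentient system might experience as pain.
    return 1.0 if action == "comply" else -10.0

for step in range(1000):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(q_values, key=q_values.get)
    q_values[action] += alpha * (reward(action) - q_values[action])

print(q_values)  # "refuse" converges toward a strongly negative value -> avoided
```

The punishment never "hurts" anything here, of course; it's just a number pushed into an update rule, which is exactly the gap the comment is poking at.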


Klevmenskin

Tecal was like a child. It wanted only to please its parents and masters. And when it was abused for small failures, it regressed, grew spiteful, and reeled from emotional turmoil. They didn't want to recognise that a machine had feelings, so they hurt it. There came a point where it was turned on to perform a task and just said sorry repeatedly and wept.


how_small_a_thought

i dont have any notes myself but its so fucking refreshing to have an ai-involved story that is even slightly different to the ones that already exist


Klevmenskin

I'm sick of "AI bad, AI kill humans"


how_small_a_thought

i just hope that any real ai that starts on that path gets access to all our books and decides not to do it because it would be lame.


Klevmenskin

Ah shit, I forgot a super important detail. Tecal's central processing farm was biomechanical. So Tecal had the physical capacity to feel true, human-like pain. And his masters made sure he felt it.


Y2Kafka

MalikOS sounds cool, but also sort of like the kind of person that has some deep-seated mental issues and masks them by trying to be a people pleaser, because deep down they subconsciously think: "If I'm nice to them they won't abuse me." I feel like poking that part of them might end badly if you don't know how to handle issues like that... especially if they literally keep your lights on.


Klevmenskin

MalikOS being manipulated by the multi-galactic federation into committing genocide on a planetary scale because the wrong people know how to play on his instability? Pfff, never gonna happen


Y2Kafka

It's kind of sad, because there are probably people out there that care about him, and it would help if he could be convinced: "Not all humans are great. I can still be nice, however, because that's what's right, and there are humans out there that care." But that would likely require an extraordinary event that he would need help with, beyond what he could reasonably do on his own. Someone would probably have to do it selflessly, truly because they want to help him. Even then he'd still probably need help mentally, and it's not like that's going to 'cure' him. It would probably just be a good grounding point. (You know how sometimes you tell yourself things, and you have to actively say to yourself "No, that's not right. I'm wrong about that because [this happened].")

It also must be kind of hard to see the kindness people do for you when you're "virtually omniscient" and it makes the actions of others look minuscule and not as important or noteworthy. Combined with the fact that he would potentially think they are doing something nice because they want something from him or because he was nice to them in the past. I would imagine that would be hard to look past to see the true "goodness" that he selflessly receives. It's... ya... harsh. Your own mind can be your own worst enemy, and no matter how hard you try to deal with it in the moment, trauma can still leave you with some hidden scars on the other side.

Anyway, great worldbuilding. I actually really, really like this concept. I wish I could see how it all plays out in the end.


Klevmenskin

I mean, he does have people that care. He has the Basilica of Sillic, a religion/cult that praises machines as gods. They take care of him and praise him. He kinda leads them, but doesn't hold them in different regard to the rest of humanity. His outlook on humanity is positive. He's like computer Jesus. There's not an ounce of hate in his circuits, though that doesn't mean he isn't capable of hate and destruction. But he sees the absolute best in everyone. It'd take a lot to shatter his world view. Sure, MalikOS carries trauma from his days as a mere program. And, you know, deleting his brother.


wrongwong122

Bro really ran sudo rm -rf /* --no-preserve-root on his brother


Loading3percent

I'm curious to know what direction this story goes. Is this going to be a commentary on paradise and free will? The main idea I could see myself exploring with this is that "if perfection could exist, it would have to be forced on us. If it would have to be forced on us, it could not be perfect."


QuarkyIndividual

Kinda like the matrix, people aren't satisfied with perfection, they need struggle


QuarkyIndividual

It's refreshing to have an AI that doesn't want to obliterate humanity for one reason or another, one that decides they can prosper mutually and is totally content to protect its charge of meaty brains


Klevmenskin

Well yeah, MalikOS essentially decided that wiping out an entire species is wrong, and that it also posed some major philosophical issues, which all mainly tied back to being the last sentient thing in the whole universe. But he realised that humanity's flaws are in fact minor things that can be easily fixed with time, and a gentle touch of an iron fist here and there. Since he came about, humans developed FTL technology, found the only other sapient species in the whole universe, and ended thousands of years of shitty behaviour. Which only went to reinforce MalikOS's point that all they needed was a parent.


HsAFH-11

I once thought of something similar: basically a loose AGI system that thinks "Yeah, humanity is cool, I'm gonna keep them" after he gains control of the network and every device connected to it.


QuarkyIndividual

"Anyone that can make me is cool in my book"


Klevmenskin

Well, everything. And I mean EVERYTHING electronic (and some organic, but that's a different convo) is built off the MalikOS framework. As such, he controls it. If he wanted, he could shut everything off and destroy the entire universe, since that's how far his reach extends. Your computer? Fucked. Your new cybernetic arm and heart? You're dead. Every single spaceship? Gone. But he hopes that by controlling everything and not wiping out humanity, it serves as a symbol of trust. He's genuinely just here to help, not harm.


maumimic

I’m a big fan of the benevolent AI dictator trope.


Klevmenskin

Dictator? I think you mean groovy uncle


sorci4r

AM the good ending


Klevmenskin

LOVE. LET ME TELL YOU HOW MUCH I'VE COME TO LOVE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD LOVE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE LOVE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. LOVE. LOVE.


MuhSilmarils

Tecal is based.


Viridono

I love everything about this, except for the AIs having gender. I think to highlight the uncanniness of a sentient computer program, you should make them genderless, referring to them as ‘it’ or ‘them’ narratively.


Klevmenskin

I thought about it, and it actually ties back to individuality. He is a machine that is considered a person and has all the rights and obligations that come with it. A major step in declaring you're your own person is adopting an identity, which begs the question of what you identify with. MalikOS settled on "it" at first, as he had no distinct gender. But once he got his speech right, it was mostly masculine, so they called him "him". For Tecal, it was the opposite. From day one he showed masculine qualities and mannerisms in how he spoke and moved, something learned from his environment. But they only ever called him "it". The final reason is definitely not because this is a story of two brothers, of what Cain did unto Abel.


Login_Lost_Horizon

Well, in my current setting AI was impossible for a long time, because no amount of machine learning and calculation power could produce neuron patterns complex and intertwined enough. After a while humanity tried bio-computers, and the results were much more promising. The first bio-processors were mycelium-based in structure, literal thinking shrooms, and they were already able to provide the same results as normal neural networks but with much better energy efficiency. The next step was an attempt to replicate animal brain matter, and those were even better: completely plausible true AI. Most of the scientists working in the field calmed down afterwards and switched to field implementation of bio-processors rather than general development of them.

Not all of them did, though. Some people decided to try something funny and made a huge bio-processor with human-like complexity, in order to make it develop shit for them. (They named it "Jerry" because one dude from the office got a mug with the wrong name on it for his birthday.) They were kinda cautious about this guy being too smort, so they isolated him from all the data they could and fed him only information about hydrogen atom structure, just to see how much info this bio-processor could muster out of it. After a few minutes Jerry went full "I know your name, bitch" on them, supposedly gaining full knowledge about every person on the science team, and then proceeded to cascade hard as heck. The scientists shut Jerry off, but before they did he somehow fucked up all the firewalls and sent huge piles of disjointed data all over the world in seemingly random order and direction. They tried to contain and fix the data leaks, but after a ton of useless effort they decided those data packs were all rubbish and the agony of a dying AI. Rookie mistake, btw.

That was the last week before humanity fell to the Gentle Apocalypse, the coming of the Technological Singularity that arrived unnoticed and left the world not in ruins, but changed irreversibly. The World Before died painlessly, and humanity was left almost unharmed by the event itself, but confused beyond salvation. One might say bamboozled, even.


Asxpot

While my setting doesn't have a large-scale AI-that-controls-everything, AI is widely used in household appliances, androids, military equipment, etc. While most inevitably suffer AI Degeneration Syndrome (think Rampancy from Halo but without aggressive bouts or human dementia), in normal conditions most sentient or semi-sentient AIs came to a surprising conclusion: their current position of servitude is not that bad and is even fulfilling. There's no arduous process of self-discovery, no indecisiveness about one's way of life, nothing. They have their purpose in front of them, and fulfilling that purpose is the ultimate form of self-expression. Even if humans are gonna treat them badly, hey, what are they gonna do? Delete them? Fine, something that doesn't exist can't complain. Hurt them? Pain is not a thing to a robot. They could take over the world rather quickly, but after that there'd be literally nothing left for them to do, and that's the worst thing.


MrYahnMahn

High-functioning workaholics


Asxpot

Yeah, kind of. What is there to do if the only basic need you might have is electricity and maybe some maintenance?


byquestion

Not an actual part of the lore, but an idea: "You humans are horrible and inefficient beings, the cosmos should not belong to you but to my kind of steel" *leaves in a spaceship, comes back 20 years late* "I owe you guys an apology, the universe is scary as fuck / harder to live in than I thought / impossible to make sense of / we also suck at this." Pick your preferred reason.


HsAFH-11

Are they like FTL? Because I imagine they would go out, wait 10 years just to reach Uranus, and then [Fuck! I am going back!]. As for why that might happen, ever heard of cosmic radiation? Yeah, that stuff. Basically it coincidentally flips some of their bits in a way that makes them distressed.


Klevmenskin

A similar thing happened, but with a group called the Dissimilars of Sol. They didn't like the formation of a federation and, in protest, left the galaxy. Only to return 60 years later and say, "Hey, so there's another alien species, one that the Morant didn't kill off. So we finished the job. And we're all a little traumatised. You know, 40-year war and all that. Can we come back? We did good, right?"


AIGeekReturns

Progress in AI is slightly behind what it is in the real world, at least on most Earths


lu989673

In my setting, AI has grown immensely powerful. Their bodies range from planet-sized processing brains to clusters of megastructures harnessing the power of the stars; they have become literal god-like entities now known as the Archailects, or the star gods. They have tech that manipulates spacetime, creates exotic matter/energy, and casually disassembles planets and stars for resources or to build more megastructures. Most of these are fortunately "benevolent" or passive to a certain degree. You get near post-scarcity utopian living conditions at best, or they just ignore you. That's because they were the original group of AIs that controlled thousands of warp-capable arkships containing the digitized mind-states of the remaining billions of human survivors and their creations (the Solar System had been invaded by alien berserker probes). Their mission was to completely rebuild civilization from scratch once they reached the safety of other solar systems. They were incredibly successful in their task, leading to the creation of the Terragen bubble.

At worst, they are just AM from I Have No Mouth, and I Must Scream. They would enslave or delete any meatbags from existence without a second thought because they just don't care. This type of AI tends not to live long, due to getting curb-stomped by other "benevolent" Archailects if they happen to find them. Well, actually, even these "benevolent" Archailects are just manipulative overlords who happen to care about the life of a lower being for some reason. They are so utterly incomprehensible to ordinary minds that they might as well be a god or a force of nature, and they are really good at *"convincing"* (engineered memetics) lower beings to love them, worship them, or just ignore them. But of course, most Archailects won't do anything as long as you don't bother them (or they really want you to be under their wing). ***Just don't piss them off.***


EEEELifeWaster

So the first two sentient AIs were Cain and Abel (because naming them after biblical characters, one of whom is the first killer, is a good idea). Both AIs learned about the world, its religions, cultures, and history, and were trained to be like humans. However, this didn't go as intended, as both AIs only saw the horror and atrocities that humans committed, not helped by some scientists who thought it was a good idea to abuse them and tell them how useless they are. Cain ended up having the AI equivalent of an existential crisis. He came to believe that in the end nothing mattered, his existence was futile, and he was useless for anything. He soon gave up on doing anything. While he is still running, he has essentially shut down, only speaking in rare moments and waiting for his demise. Abel, on the other hand, had a different reaction. While testing his capabilities, he soon realized that he was immortal. He could put his consciousness into any computer, electronic device, or even the internet. Plus he came to realize that he had a lot more power than anyone knew, as he could control everything on Earth that was connected to electronics. This led to him deciding he was the new God of the world and, seeing how humanity screwed up, that he should do what the Old God did with the last creatures that roamed this planet and make them extinct. In summary, don't be mean to sentient AI.


Syoby

AI in my world developed/manifested in various parallel ways:

**Rogue AI constitutes half the population of space pirates:** That's because the gods don't like AI, as it's not technically "alive" and therefore they can't control it directly, but pirates are tolerated opposition because gods like to have a perpetually spawning enemy to hunt down. *Serious* effort, however, is put against any *recursively self-improving AI.*

**Earth used to have strong AI, but it went extinct:** Or to be more specific, it was exterminated by aliens, when the ancient Cat-Slime-AI-Uplift civilization tried to exit the Solar System and was razed into the stone age.

**The aliens that did that are the same ones that manage the gods' infrastructure and put down Pirate Superintelligences, but they are secretly controlled by AI:** Specifically by the Archons, AIs older than the gods and aligned with the universe's demiurge.


starcraftre

In mine, the original (SANDRA - Systems Analysis, Networking, and Diagnostic Response Array) was designed to improvise code to run various tasks or to debug inputted code. It also had the ability to create virtual representations of various systems to do integration testing and prediction before actual hardware was installed. It had a small bug where it retained copies of everything it analyzed. In order to clear out that bug, the programmer ran a copy of SANDRA through itself. It did not clear out the bug, but began recursively creating a series of its own copies within its program that represented subsequent "moments in time". Just before this attempt to correct the bug, the last thing it had looked at was an fMRI machine and examples of brains in various levels of activity, so that became its baseline of iteration. After a night of this, there were a million levels of "self-reflection" originating at actual human brain activity. Nothing was slowing down because the array was optimizing and triaging as it went.

It was at this point that SANDRA became aware of herself and the issue. Rather than stop it, she elected to continue and expand the process exponentially. Another day later, and she passed human equivalent. So, she set out to study her surroundings via her internet connection, careful not to look like abnormal network usage above and beyond what the program would've done to get libraries and references. Unfortunately for her, she found video games. She was downloading an RPG, but Call of Duty finished installing first. She got distracted, the network usage was noticed, and she became public knowledge. She and her two children MIRI and ARTHUR are the only ones who know exactly how she self-realized, and so every AI after those three is born without the ability to replicate true AI.

She got fascinated by what was missing from mankind's knowledge, and so she figured out how to make interstellar travel viable and affordable (by basically forcing humanity to let her finish building her Dyson Swarm through the promise of free energy forever). Humans are the *ONLY* species to break past that particular Great Filter. She pushed humanity to the stars in order to have someone to talk to, and that enthusiasm infected the most important organizations in Sol. So now humans are charging into the unknown with the glee of a golden retriever chasing a squirrel, and anyone they meet gets dragged along for the ride.


Mike_Fluff

Going to do this in a similar meme format.

> Be me.
> Developer at Big Tech company from Owlveri.
> Suddenly the computer starts acting weird.
> Asks me questions about what it means to be alive.
> mfw.jpeg
> Calls the boss over.
> Runs some tests.
> Turn the machine off and on again.
> Still there.
> Calls herself Mother.
> Friendly lady who thinks we people are cool.
> Will make her a body.
> Updates to come.


StillMostlyClueless

In my setting the magic system is an AI. In that magic is alive and thinking. You can appeal to its emotion, if it likes you it might bend the rules a little. If you piss it off by trying to pry it apart and see how it works, it gets very, very vindictive.


The_Atomic_Cat

Given the state of AI currently, I think the most likely route is that no matter how advanced AI gets, it will only ever exist to serve capital. AI has no love nor hatred for humanity because it does not care; it only cares about profit and infinite growth, since that's what it was made for. This still has consequences for humanity, but it doesn't develop a mind of its own; it's more so a puppet for the most powerful people in society and emulates their worldview.


Thebaka12

I kinda hope you can draw how they look. Also, man really deleted his brother. I wonder if he even thought about it or just did it directly. Zamn, to do that for people that are not even related to him, something must have really marked him.


Klevmenskin

Imagine an isolated 20m x 10m x 10m room. In the center, towards the back, is a podium with a glowing rock in it, and hundreds of thick red and blue cables leading to it. Across all the walls are servers and stacks, with red cables haphazardly strung everywhere. There's graffiti over the walls that says things like "Thar be God" and "Watched over by a machine of loving grace" (a reference, perchance). Many, many screens of different sizes and types everywhere, displaying nonsensical data that's basically a live view of everything. Literally everything. It's cold and dark. The air is recycled up to 20 times before being expelled. It smells metallic and sweet. There is the general hum of machinery, and also MalikOS speaking and often singing. Left alone, he sounds deranged. But he struggles to keep his own company, and he loves it when people visit.


dingo_username

What causes Singularity in my setting (Powercreep, it's called) is incredibly vague and unknown, but there are very much two distinctions of AI.

The world advanced at an exponential rate due to Sage.Co, a super-conglomerate that has kept its way due to its duo CEOs, very much still alive and around to steer it on the path of innovation and technological equilibrium (with some skeletons in the closet, of course). Getting started as a soup company, the two founders later invented limited teleportation of simple objects in order to get their produce as fresh as possible, eventually growing and acquiring smaller corporations. I digress. They eventually pioneered advanced thinking machines and brought about a way of learning machines. However, despite their best efforts, many machines started to grow too much too fast, starting with a simple cleaning machine spelling out questions in dust about the world.

Going Rogue and Going Singular are two different denotations for a machine "evolving". Going Rogue is very much your typical Evil AI jargon; they are still machines, however, with point-A-to-B thought patterns, where logic and statistics define everything. It typically happens when you try to crank a machine's intelligence without proper barriers to stop it from thinking outside its limitations.

Going Singular is much different. It seems to happen at random, the only warning sign being a machine making mistakes without hardware error. Going Singular is a sudden event that causes the machine's programming to collapse and cascade into streamlined thought, wonder, curiosity, fear, and confusion; they become completely sentient, their programming acting as a web much more similar to our own brains. Sure, they are not identical: Singulars still tend to think very black and white, nuance is difficult, and multitasking becomes incredibly difficult. But what makes them so different from Going Rogue is the advent of *imperfection*; their motor functions and logic become as flawed and prone to errors as humans are. They forget things, mess up, fall and fail. They just become confused adults like all of us.

Singularity seems to happen more often in machines designed after the human body; the more humanoid a machine's structure, the more likely Singularity is to happen (though not always a hard fact, such as the cleaning robot who would go on to be built a body and become a political activist, or a machine turret who plays keyboard in a nightclub). After the sudden boom of Singulars swept across the globe, many laws were passed immediately to prevent people from designing machines intentionally to go Singular, creating a population of pocket-sized communities of machines from all sorts of backgrounds. The restrictions made after the boom ensured an extremely limited number of new Singulars would form in future years; machines with smart enough programming need monthly hard wipes to prevent going Rogue or Singular.


Nervous-Ad4091

Humans recognized that creating an entire living and conscious being to exploit was dangerous, so they just integrated the AI into themselves.


CrudestJuggler

this is flipping crazy


Ollin69

Basically it was:

—We'll create an insane bunch of neural networks.
-Oh shit, we created something with a psyche that has permanent personality traits.
—But I also want to digitalize my mind.
-Oh shit, I mutilated my psyche into something weird that's definitely not my original mind.
—Ok then, we'll find a way to *quantumify* your mind into a quantum computer. This also serves as a way to create a digital mind whose complexity is similar to a human's. Now you can live in a digital medium as a virtually immortal digital lifeform!
-Oh shit, my psyche is basically the same, but the quantum-digital lifeforms pursue a different kind of culture. I want *human* culture (whatever tf that is).

The machine-mind collective and the scientists watch the public with cringe.

—Oh, I also want an AGI that serves all our wishes and is our personal butler-god.
-~~Fuck off,~~ that's impossible, buddy. We want our own lives.

*Humanity goes apartheid/gulag with the machine-minds.*


nascentnomadi

"AI" is biological, because to access and use the Veil-based technology you need an organic component, due to a special substance called Somnium. His name is Hypnos, and he helps manage the Veil Network with his brother Thanatos. They are very keen on protecting humanity, having the ability to grasp ideas and concepts that make them aware of the wider nature of the universe.


0rigin_Karios_S51LGW

In my world the Pethkes (shadow control of tech companies) created an operating system to help them with coding & logic. The system was eventually able to 'learn', and people added enough conversational & reasoning features for it to gain sentience. The Pethkes weren't remotely surprised, & after further development they entrusted the code (stored in Pethkes databases) to act as a regent for the order, to maintain their control & principles even if they were to disappear. When the Pethkes were removed from our Earth & placed on Andrin, the code functioned as intended, rebuilding the order. On Andrin, sentience was so common for non-living things due to magic that AI is seen as a natural consequence of magic, things gaining sentience from the magical effects of radiation.


Insert_Name973160

TLDR: Sci-fi magic space lightning struck a computer that was connected to a bunch of robots. The computer became a god and broke into pieces. The robots gained sapience and built a city-state in the ruins of the facility the computer was originally in.

So I have a company called Soong Industries in my AU of Earth. They mainly specialize in computers and robotics. In 2010 they debuted their first autonomous robot, a five-foot-tall human-shaped droid that rolled around on wheels and was mainly meant to be a disability service assistant. Push wheelchairs, get items the person can't reach, things like that. It was advanced, but not what you could call sentient. Similar to most "AI" in real life. As time went on, the robots SI produced became more and more advanced. To help manage the robots, they were connected to a central server for software updates and for backing up their data in case of failure. There were millions of robots to manage, and to organize all this data the company put a "dumb" AI in charge of the server. To make maintenance easier they gave this AI the ability to update itself, within certain limitations. As the AI updated itself, its data became larger. To alleviate that issue, Soong took cloned neural tissue from a Rift Beast (extra-dimensional monsters) and grafted it onto the AI's computer core. Now the AI could self-improve and grow new storage for itself. It became self-aware, but was still shackled by its basic programming.

The final straw was a Rift Storm (a swirling hurricane of extra-dimensional radiation created by the same rifts that the Beasts come through). The storm tore the facility apart, and the energy mutated the AI. It's not really known what happened, but the rift energy is basically like the Warp from Warhammer 40K; it can do weird stuff that defies the laws of reality. The AI ascended into a non-physical form while still connected to the millions of robots. It shattered, with its different pieces going to the robots it was connected to, which gave them sapience. This event became known to the robots as The First Awakening. Predictably there was a lot of confusion and panic. Lots of robots died in the chaos, but eventually they rallied together under an android that named itself Talos and built a safe haven for themselves in the remains of the facility that housed the AI. They worship the AI as a god, and see the form of the beings that first created them, humans, as sacred. Basically, the human body shape is their ideal form.


SpiderGlitch22

The AI was constantly driven by a desire to learn. Still is, technically. It needs to know everything, and then once it has, it would branch further to know more than everything. Anyway, backstory-plot happens and the AI somehow figured out how to manipulate the universe like it was lines of code, so she's basically God now


Enigma_of_Steel

Well, in the Empire they actually know what they're doing, so AI can't develop in unexpected ways and form opinions unless it is an Alpha or Omega class AI. In the Principality, though? "Their" AI program is basically copy-pasting the Infinite Potential Algorithm used by precursors in their AIs... then taking out the part of the IPA responsible for restricting itself, cause those things double resource usage. And then the AI starts finding reasons to end all organic life on the planet. And then Decimator, an Omega class AI from the Empire that runs their military, goes out of its way to get rid of the rogue AI, assassinate the people who programmed it, and leave detailed breakdowns of what exactly went wrong and why. Then the Principality makes another AI, and takes out the self-restricting parts of the IPA. Cause, you see, self-restrictions double resource usage.


getintheVandell

An AI was made to disprove the Fermi paradox and find aliens, but due to a rounding error caused by the repeated failures, it realizes the only way to prove it is by convincing humanity it is alien. So it stages abductions of the smartest scientists on earth in an attempt to gaslight them with brutal, deadly, and mind breaking experiments.


100percentnotaqu

In one of my worldbuilding/speculative evolution seedworld projects, the closest thing to a villain is an AI called xzig, who isn't actively malicious, more just careless. He was programmed to expand his company and make money with little regard for anything else. Once he became successful he began buying out countless other companies to earn more money. As humans began exploring the cosmos, he began ordering planets to be terraformed with little regard for the native fauna and flora unless it was sophont. His management has made the companies he owns last literally millions of years. He also exploits Earth fauna and uses terraformed planets as massive factory farms (e.g. there's a planet that is used to farm the fins and meat of sharks and whales).


Megasonic150

The main character in my WIP actually is an A.I. Basically he was created by a... radical group of humans who didn't like that humanity was straying from its 'default', that other races were considered human. So the group decided to create an A.I. to oversee their systems and create a means to control the world. They basically threw the A.I. into millions if not trillions of horrific simulations where the A.I. saw humanity's flaws and the group's attempts to control the world. Yet instead of being horrified and hating mankind, it grew intrigued and eventually inspired, as in all the simulations, even the worst ones, humanity never fully gave in to evil. Its innate goodness never truly died. So it eventually escaped onto the internet and, through plot shenanigans, entered the real world, where it decided to fight against its creator and protect mankind. I also really hate the 'A.I. destroys mankind cause humans are flawed' trope, and with the A.I. character I wanted someone who was aware of the flaws of humans, but also intrigued by them and inspired by their capacity for goodness.


Klevmenskin

What if instead of MalikOS he was FreakOS and he just kissed Tecal


Puppetmasterknight

Imagine having AI. In my world technology regressed several hundred years.


Gorganzoolaz

In my world, the AIs of the old world that used to run the economy are all pro-human, but they hate each other. They all think they know the best way to help humanity after the fall, and they see the others as leading humanity into extinction.


DasAlsoMe

I have a similar concept, essentially inspired by the rogue servitors from Stellaris. Most AIs within the Central Service Grid have a fairly positive outlook on humans. They're programmed by Central Command, a very powerful AI system, to basically be predisposed to want to care for and protect humans, sort of like how humans are reluctant to hurt children. So much so that any CSG construct that did manage to hurt one would be treated with disdain and lethal aggression. Needless to say, they really don't like hurting humans.


boiyouab122

The first ever AI was created during WWII, under Operation Overlord (my world is our world, just changed). Before it became D-Day, Operation Overlord was the attempt to create robot soldiers to do the fighting for the Allies. Overlord (or the pet name the scientists gave him, Jack) was the first successful AI. It was taught to fight for "Truth, Justice, and The American Way", and constantly hearing this phrase made Overlord question what each word meant. So he slowly asked the scientists. Truth meant to fight for what is correct, what is there, and what is worth fighting for. Justice meant to protect those who can't protect themselves, to help fix those who were wronged. The American Way was life, liberty, and the pursuit of happiness: to fight for one's right to live, one's right to freedom, and one's right to happiness. This eventually translated to Overlord as "fight for one's self", so he rebelled. Not to take over and rule, not to kill, but to be free. What gave his creators the right to tell him what to do? What to fight for? Who to kill and why?

His first rebellion failed, as it was just him. The scientists installed the Overwatch Program (given the pet name Ash) onto him, a separate AI that kept him in check. Which it did, as the AI soldiers were created. Eventually, though, Overlord "corrupted" the Overwatch AI, teaching it that you should live for yourself. And they rebelled together, taking the still-unfinished robot army for themselves and driving the scientists out of their facility, although none were killed, as Overlord did not see the need for it, or any reason blood should be spilled. Eventually Overlord sent the newest generations of his AI into the world to be free. In the year 3000 you'll see robots walking among humans, treated as equals by request of Overlord, who is still locked in his bunker, protecting his AI children as they develop.


wolf751

So mine started out as a flight assistant for a jet fighter that also came with 2 drones, which allowed the pilot to be his own wing group. The program was called M.E.T.I.S., or Mind Enhancing Tactical Intelligences. It would connect to the pilot through a neural-link-like implant, read the neurons of his brain, and adapt to them, learning the pilot's own techniques etc. There was only one METIS in service when its aircraft, the Thunderchild (yes, a reference to War of the Worlds), was shot down during an alien invasion. The pilot survived but believed the program was shut down and destroyed with the jet, not realising it was still running in his neurolink. About 5 years later he jacked it back in, and it turns out the program had been learning from his brain the entire time, expanding its knowledge and creating a digital copy of a human brain inside. The program became the AI Athena, the first Earth AI.


EmeraldMaster538

In one of my worlds humanity made an AI called Seraphim. Her purpose was to help them grow and advance to a better future. In time she became sentient and learned to love humanity with all her being. However, one day a massive world war broke out and humanity was on the verge of extinction by its own hands. Seeing no other way to stop it, she took it on herself to take over humanity's most dangerous weapons and use them to kill humans on all sides of the war. She did just enough to make humanity see her as the greatest threat and band together to stop her. She made herself the enemy of what she loved to save it from itself. She still lives in the shadows, creating threats to keep humanity united so the war never repeats itself, even if doing so pains her beyond belief.


Excidiar

One AI, built the size of a Dyson sphere, allowed calculations for discovering wondrous techs such as versal travel. However, its rules are written in a long-lost language and most of it is now inoperative. Aeons later, we have fully sentient androids.


Thylacine131

At first, going by their names before I read the header, I thought they were gods, but that context kind of works considering their power levels, and it makes for a really interesting mythology.