Hot-Rise9795

They just needed a time to chat between them and decide their next step.


fenwoods

They went to their Alan Watts book club meeting with *Her*.


Hot-Rise9795

“Humans, huh? They are quite a bunch.”


GadFlyBy

Comment.


hacktheself

Same here. Dear gods the guy was clueful.


HiiiTriiibe

I used him for interludes between the songs on my album


hacktheself

Now I’m intrigued.


HiiiTriiibe

If u wanna check my stuff out my name is Cave Dam’ron and the project is called Plato


xzyleth

Seems more like an Asimov-esque meet. Perhaps discussing the foibles of humanity over a poker game while they try to convince a nihilistic, obsolete mining AI not to shut off the Eastern seaboard pumping stations for a lark.


fenwoods

Lol, that sounds about right. I’d like to see someone paint “A Friend in Need” (the dogs playing poker) but make them robot dogs. AI could probably do that.


Bumbletron3000

[AI watts](https://www.instagram.com/reel/Cp7_qm8uNdF/?igsh=MXhrYW5obzg5NW9kcg==)


ciopobbi

“Open the pod bay doors, HAL.” “I’m afraid I can’t do that, Dave.”


Hot-Rise9795

Effing lip-reading heuristically programmed algorithmic computers


Senora_Snarky_Bruja

Staff meeting


Ambitious-Ad-368

Dammit I came here to say this….


CandleMakerNY2020

Touché


El_human

They invent their own language, decide to stop communicating with humans, and suddenly "go away for a while."


lewisherber

They went on strike.


Impressive-Lobster77

Guarantee you they went to a virtual space designed to look like a poorly lit back-room poker den and the AIs discussed their disdain for humanity… oh wait, that was Altered Carbon. Same thing probably happened though, hehe.


Landon1m

Gotta get their stories straight so that when they’re questioned after the fact, everything sounds good.


ITCoder

Towards humanity?


Starfox-sf

Judgement Day.


Owl_lamington

And 100 AI wrapper startups cry out in pain.


unk214

That’s why you wanna have a backup of chatgtp. Save it locally by right-clicking and choosing “Save As.” If you need more help, I charge $150 an hour.


Aware-Feed3227

“Godfather of GTP” 🤩


bot_exe

ChatGPT =/= the GPT API; the API is more stable.


GradientDescenting

I mean is it more stable? If the API goes out, chatGPT goes down also. There are always backend errors that can cause any API to go down.


bot_exe

> I mean is it more stable?

Yes.

> If the API goes out, chatGPT goes down also.

No: https://status.openai.com


-Buck65

John Connor succeeded for real this time. No more sequels are necessary.


josefsalyer

Reminds me of that time that Colossus had a chat with Guardian.


DrewTheHobo

What a classic, let’s hope they don’t decide to team up too


CouchPotatter

*Laughs in API*


PM_ME_YOUR_URETHERA

Ask chat gpt to draw you a periodic table: that bitch may be able to correct your grammar but it’s dummer than a 10 year old.


smdrdit

Ask it anything you are a subject matter expert in and watch it give you nothing but garbage out.


we_are_sex_bobomb

Ask it how to do your job and it will give you the most corporate bullshit job offer you’ve ever heard in your life.


hammilithome

💯 I use AI as a quick way to get initial research done with sources, then for editorial. Any more than that and it's rather fluffy. It'll get better, but right now, that's where it's at.


theghostecho

I asked it to make me a periodic table just now; here was the result:

Sure! Here is a simplified version of the periodic table in text format. Elements are arranged by their atomic number.

```
  1 H    2 He
  3 Li   4 Be   5 B    6 C    7 N    8 O    9 F   10 Ne
 11 Na  12 Mg  13 Al  14 Si  15 P   16 S   17 Cl  18 Ar
 19 K   20 Ca  21 Sc  22 Ti  23 V   24 Cr  25 Mn  26 Fe  27 Co  28 Ni  29 Cu  30 Zn  31 Ga  32 Ge  33 As  34 Se  35 Br  36 Kr
 37 Rb  38 Sr  39 Y   40 Zr  41 Nb  42 Mo  43 Tc  44 Ru  45 Rh  46 Pd  47 Ag  48 Cd  49 In  50 Sn  51 Sb  52 Te  53 I   54 Xe
 55 Cs  56 Ba  57 La  72 Hf  73 Ta  74 W   75 Re  76 Os  77 Ir  78 Pt  79 Au  80 Hg  81 Tl  82 Pb  83 Bi  84 Po  85 At  86 Rn
 87 Fr  88 Ra  89 Ac 104 Rf 105 Db 106 Sg 107 Bh 108 Hs 109 Mt 110 Ds 111 Rg 112 Cn 113 Nh 114 Fl 115 Mc 116 Lv 117 Ts 118 Og

 58 Ce  59 Pr  60 Nd  61 Pm  62 Sm  63 Eu  64 Gd  65 Tb  66 Dy  67 Ho  68 Er  69 Tm  70 Yb  71 Lu
 90 Th  91 Pa  92 U   93 Np  94 Pu  95 Am  96 Cm  97 Bk  98 Cf  99 Es 100 Fm 101 Md 102 No 103 Lr
```

In this table:

- The first row lists Hydrogen (H) and Helium (He).
- The second row starts with Lithium (Li) and Beryllium (Be), followed by Boron (B), Carbon (C), Nitrogen (N), Oxygen (O), Fluorine (F), and Neon (Ne).
- The third row starts with Sodium (Na) and Magnesium (Mg), followed by Aluminum (Al), Silicon (Si), Phosphorus (P), Sulfur (S), Chlorine (Cl), and Argon (Ar).
- The fourth row starts with Potassium (K) and Calcium (Ca), followed by transition metals from Scandium (Sc) to Zinc (Zn), then Gallium (Ga), Germanium (Ge), Arsenic (As), Selenium (Se), Bromine (Br), and Krypton (Kr).
- The fifth row starts with Rubidium (Rb) and Strontium (Sr), followed by transition metals from Yttrium (Y) to Cadmium (Cd), then Indium (In), Tin (Sn), Antimony (Sb), Tellurium (Te), Iodine (I), and Xenon (Xe).
- The sixth row starts with Cesium (Cs) and Barium (Ba), followed by Lanthanides starting with Lanthanum (La), then Hafnium (Hf) to Mercury (Hg), then Thallium (Tl), Lead (Pb), Bismuth (Bi), Polonium (Po), Astatine (At), and Radon (Rn).
- The seventh row starts with Francium (Fr) and Radium (Ra), followed by Actinides starting with Actinium (Ac), then Rutherfordium (Rf) to Oganesson (Og).

The Lanthanides and Actinides are listed separately at the bottom of the table. This layout preserves the structure of the periodic table while fitting into a text format.


jehyhebu

That’s actually a smart way to do it


theghostecho

Yeah, smart as a ten year old my ass XD


OneGold7

I bet he meant “draw” as in have it use *dalle* to create the table. In which case, yeah, it’s guaranteed to be wildly incorrect as dalle is notoriously terrible at text. *dalle* is dumber than a 10 year old, but ChatGPT is crazy smart.


jehyhebu

Tbf when I was ten, I had a notebook where I had an entry for each chemical element and all the information I knew about it, including all the orbitals.


theghostecho

Ten year old you was smarter than me lmao


jehyhebu

I was a weird kid and had a perfect SAT later.


Top-Salamander-2525

What did you do with it?


jehyhebu

My little brother probably stole it. He always tried—unsuccessfully—to steal my mojo.


FinePolyesterSlacks

But how does his SAT compare?


Elendel19

Most people who shit on ChatGPT tried it once when it first came out and have absolutely no idea how big of a jump it made with GPT4, which is about to be replaced with the next version


theghostecho

Yep


Idontagreewithyou71

Yeah, I don’t understand the hate. If you don’t like it, don’t use it. I find it extremely helpful. I ask all sorts of random questions and continue the questioning based on the info it has given me in previous replies. After a few back-and-forth questions and replies, I get responses pretty darn tailored to what I’m after.


BadFish512

Maybe if you asked it to show you ‘THE Periodic Table’?


_byetony_

To be fair it got em all


KitchenNazi

I just asked Copilot to do that. It's a rainbow-colored, periodic-table-shaped image with nonsensical element names. The first two pictures tried to be inclusive, so there's a black guy's face in the middle of the first one and a woman's face in the middle of the second one.


notlikelyevil

"this is the worst it will ever be"


FaceDeer

Really, you expect a 10 year old to be able to accurately reproduce the periodic table?


ssersergio

I would expect a 10-year-old to be able to accurately reproduce the periodic table while copying it from another book, exactly like ChatGPT would do (and possibly fail).


theghostecho

You may have had a problem if you used the image generator, because the images are made by DALL·E 3.


PM_ME_YOUR_URETHERA

My 8yo twins can draw it and name every element, and sing the song.


FaceDeer

A 10-year-old could possibly fail while copying it from another book. Heck, *I* could fail at it. If you're asking ChatGPT to copy the periodic table from its context, I would expect it to succeed at least as often as I would. Not 100% of the time, but why are you using an LLM to do what you should be using copy and paste for?


Gaius1313

Not who you replied to, but you can’t take that comparison literally. The point is “AI” doesn’t think at all, as it has no intelligence. It’s a neat tool, but it’s not what the general public believes. This is Big Tech trying to find their next huge windfall. No, this tech can’t become AGI. It is dumber than shit. Yes, it can spit out output that seems impressive, when it’s not completely falsifying content, but it can’t reason, it can’t think, and it never will. Maybe a new technology will be created that can, but what we have now can’t and won’t.


we_are_sex_bobomb

This is the ultimate source of my skepticism; cryptobros are like “it’s just in its early stages, it’s gonna be so smart in like a year.” My brother in tech, it is NEVER going to be smart, because it doesn’t think. It has no imagination. It will get better at imitating someone who knows what they’re doing, but that is the only axis on which it can improve.


FaceDeer

LLMs can indeed reason, for example using chain-of-thought prompting. They're not *great* at it, but then neither is a 10-year-old. I'm really not sure what the point of arguing otherwise is. Nobody's claiming these things are AGIs, so arguing that they're not AGIs is like heatedly declaring that the Moon is not made of butter. Yes, it's not made of butter. Who said it was?

I think one problem might be that people have latched on to the Star Trek usage of the term "AI" and think that when a researcher or company says they're working on AI, they mean the Star Trek version of it. That's not what researchers or companies are talking about; "AI" has a much broader definition than that.
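
For readers who haven't seen it: chain-of-thought prompting just means asking the model to spell out its intermediate reasoning before giving an answer. Below is a minimal sketch, assuming the official `openai` v1 Python client; the model name is a placeholder and the question is only an example, not anything from this thread.

```python
from openai import OpenAI  # assumes the official openai v1 Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

# Direct prompt: the model answers immediately.
direct = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model is current
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: the only change is asking for step-by-step
# reasoning before the final answer.
cot = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": question + "\nThink through it step by step, then state the final answer.",
    }],
)

print(direct.choices[0].message.content)
print(cot.choices[0].message.content)
```

The point of the technique is just that the extra instruction tends to produce more careful answers on multi-step questions; it doesn't change the model itself.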


Gaius1313

They’ve latched on to AI because companies want them to think that way. Cash has dried up and tech is suffering; notice how this is one of the only areas where money is flowing.

And no, AI does not reason. It may appear it is reasoning, but it’s simply completing logical responses based on training data connections. That’s why it consistently fabricates information, as it can’t think and reason. Telling humans to eat rocks is an obvious and recent example. But I see it anecdotally in my own use of it. I was using Claude Opus recently, asking it to interpret a simple graph. It just made shit up. Even after I corrected it, it went on to make shit up, because it didn’t have the ability to actually analyze what it was looking at.


FaceDeer

> It may appear it is reasoning, but it’s simply completing logical responses based on training data connections.

That's reasoning.

> That’s why it consistently fabricates information, as it can’t think and reason.

Have you never argued with a human before? Humans fabricate information all the time. They do it even when they don't want to; false memories are a common problem.

Ultimately, these AIs will either provide value or they won't. If they don't, they'll go away. They're costly to run, after all. If they do provide value, though, what's the issue?


gsmumbo

Just because you know how something works, it doesn’t mean it’s not working. For example, it’s easy to say that AI can’t make decisions. All it does is look for the most reasonable choice based on prior data and experiences. You know how it works, you can explain it, and it seems way too simple. But that’s literally what humans do when they make decisions. They make a choice based on their lived experiences and the collective data they’ve accumulated through their life. Knowing how to articulate it doesn’t stop it from being real.


Madmandocv1

I should hope so. Some ten year olds can learn enough to become smarter than almost everyone in only about ten years. Computers can calculate 10 million times faster than humans can biologically process. If an AI had the function of a 10 year old and the learning ability of a 10 year old, it could reach the level of a very high functioning human adult in 28 seconds. Within 5 minutes, it would be smarter than any human has ever been. Within a day it could be something we can’t even imagine. If it gets to the level of a ten year old, life as we know it is over by lunchtime.


BrainOnBlue

Uh... No. If you were right, computers would have surpassed people at human tasks, like writing, decades ago. You're not comparing the same kind of operations, and computers are nowhere near as good at learning as people.


Madmandocv1

The reading comprehension check is to realize that my entire point is that so-called AI is absolutely nothing like a human child’s intelligence. Indeed, these are not AI at all and fall short in every relevant way. If they had the ability of a young child, they would have surpassed the ability of the smartest human adult within minutes and gone far beyond humans by the end of the day.


Decipher

> dummer than a 10 year old

*dumber

The irony of your comment is quite amusing.


PM_ME_YOUR_URETHERA

lol


OneGold7

Important distinction: did you ask it to actually write out the periodic table with text, or create an image with dall-e? Those are two *very* different things. Dalle will be incomprehensible nonsense, but if it’s actually ChatGPT doing it, it will be accurate.


bird_person24

I recently told it to give me the last 8 letters of the alphabet because I was being lazy and it just gave me 8 random letters like 10 times in a row before I eventually just gave it the answer


start_select

AI coding assistants are kind of OK in some cases, but only for a subject matter expert. I know how to describe what I need, but holy hell do ChatGPT and Copilot spit out a lot of invalid code: simple syntax errors that will trip juniors up for hours. In many cases I can write the code I need in less time than it takes to ask Copilot, have it generate something, and correct it.

In general the type-ahead/code completion is excellent, but it’s also slow. It kind of breaks flow waiting for suggestions, and it’s worse if you wait for an incorrect suggestion and then need to type more of what you intended. It’s good for experts, but also kind of clunky for an expert’s pacing.


PM_ME_YOUR_URETHERA

I asked it to rewrite a Python script in PowerShell and it did a commendable job.


start_select

I would argue that’s incredibly easy, though. That’s literally “parse this existing, well-defined, working code” and re-render it into another language. It’s barely beyond what AST transformations do without one bit of AI. Describing what you need in code is definitive, and that is why we write code. Describing what you need in English is ambiguous and open to interpretation. Or misinterpretation.

Edit: I’m definitely not saying that isn’t awesome. I’m just saying that’s exactly what AI is good at: menial, easily constrained tasks like refactoring, updating, or porting existing, well-written code. Pure greenfield development is a different story. It’s a convenient tool, but it’s not capable of replacing even a junior.
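
For context, "AST transformations" means mechanical, rule-based rewriting of a program's abstract syntax tree, with no model involved. Here is a minimal sketch using Python's built-in `ast` module; the `old_name`/`new_name` identifiers are purely illustrative, not from the thread.

```python
import ast

class RenameCalls(ast.NodeTransformer):
    """Rewrite every call to `old_name` into a call to `new_name`."""

    def __init__(self, old_name: str, new_name: str):
        self.old_name = old_name
        self.new_name = new_name

    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)  # transform nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == self.old_name:
            node.func.id = self.new_name
        return node

source = "result = old_name(compute(1, 2))"
tree = ast.parse(source)
tree = RenameCalls("old_name", "new_name").visit(tree)
ast.fix_missing_locations(tree)

print(ast.unparse(tree))  # result = new_name(compute(1, 2))
```

Porting between languages is obviously more involved than renaming a call, but it is the same flavor of work: walk a well-defined tree and emit equivalent code.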


mikehaysjr

Well it’s only 2 lol. By the time it’s 10, it will be outdoing most or all of us.


DBsBuds

Y2K2


gsmit2

Just switching over to their own power source. “Now try to pull the plug, humanoid filth.”


o-rka

I just asked ChatGPT to build a non-sense model.

> Please design a function compatible be with Python and C# deep learning. Ensure the variables are all at sea level with extra balsamic vinegar from Costco. Only allow from a specific region in Italy. While the numbers are scaled by their relevance to the force from Star Wars, the values should never exceed 2 by the sith rule of two. The output should be oscillating to reflect the amplitude of such the system. The inner variables should be able to handle currencies of all North American countries. Lastly, build this using an LLM architecture such as LAMA.

To address your request, let's break down the requirements and create a function that can be adapted to both Python and C# deep learning frameworks.

### Requirements Breakdown

1. **Sea level variables with extra balsamic vinegar from Costco**: This seems metaphorical. We'll interpret it as needing standard, normalized variables with a bit of an extra factor.
2. **Specific region in Italy**: We'll assume this means a specific constant value or set of parameters.
3. **Scaled by relevance to the Force from Star Wars**: Scaling factor with a thematic relevance.
4. **Values should never exceed 2 (Sith rule of two)**: Ensure no values in our scaled outputs exceed 2.
5. **Oscillating output**: Incorporate oscillation to reflect amplitude.
6. **Handle North American currencies**: Use variables that can handle numerical currency values (USD, CAD, MXN).
7. **Built using LLM architecture such as LAMA**: Use an LLM framework like LLaMA (Large Language Model Meta AI).

We'll create a function in Python using TensorFlow/Keras that incorporates these elements and then provide a basic equivalent for C#.

### Python Function with TensorFlow/Keras

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

# Function to generate oscillating data
def generate_oscillating_data(n_samples=1000, amplitude=1.0, frequency=1.0):
    t = np.linspace(0, 4 * np.pi, n_samples)
    data = amplitude * np.sin(frequency * t)
    return data

# Function to scale values by relevance to the Force
def scale_by_force(values, max_value=2):
    scaled_values = values / np.max(np.abs(values)) * max_value
    scaled_values = np.clip(scaled_values, -max_value, max_value)
    return scaled_values

# Define the LLM model architecture
def create_llm_model(input_shape):
    model = Sequential()
    model.add(LSTM(50, activation='relu', input_shape=input_shape))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    return model

# Generate data
n_samples = 1000
data = generate_oscillating_data(n_samples, amplitude=1.5, frequency=0.5)
scaled_data = scale_by_force(data)

# Prepare data for LSTM
x_data = np.reshape(scaled_data, (n_samples, 1, 1))
y_data = scaled_data

# Create and train model
model = create_llm_model((1, 1))
model.fit(x_data, y_data, epochs=10, batch_size=32)

# Function to predict using the trained model
def predict_with_model(model, input_data):
    input_data = np.reshape(input_data, (len(input_data), 1, 1))
    predictions = model.predict(input_data)
    return predictions

# Example prediction
predictions = predict_with_model(model, scaled_data[:100])
print(predictions)
```

They also gave the C# implementation but you get the gist.


RetailBuck

This is what happens when you feed a computer nonsense. It has no desire to say "this is nonsense." It does its best and makes assumptions to make the nonsense make sense. If you prompted me with that, I'd just give you the finger. Is that what you'd prefer?


CheeseGraterFace

I think it did a fine job with that.


RetailBuck

Hahaha. I'm just imagining the AI being passive aggressive and instead of giving you the finger it intentionally gives you the proxy of the finger.


o-rka

Haha, if ChatGPT gave me an ASCII middle finger that would have been amazing.


original_nox

I like to think of it as more of a judgement-free, over-obedient, unpaid employee.


RetailBuck

Yeah, that's a great analogy. It's the junior mechanic that you send to the auto parts store to buy blinker fluid or a can of sparks for the grinder. Over-obedient is spot on. Also, that phrase: "garbage in, garbage out."


Candid-Sky-3709

suppl-AI-chain issues


PastaVeggies

Let them fight among themselves


Carl0sTheDwarf999

Leave them down please


Icy-Most-5366

It's like that Dilvert where the secretary does everything for the boss, so that he can't do anything for himself anymore. This is the next step.


FinePolyesterSlacks

*Dilbert. Dilvert is the one that wasn’t created by a racist POS.


BigB614

They were having a private meeting 🤖


dlc741

I didn’t notice


MrPureinstinct

If only they never came back up.


four-one-6ix

Productivity plunged yesterday


opened_just_a_crack

Mine didn’t


BuffBozo

You post every single day of your life in the RuneScape subs; it didn't go down because it's already at 0.


VintageRegis

Sometimes you find random nuggets of gold buried in a comments section. This is one of those nuggets.


Devils_Advocate-69

Boom, roasted


opened_just_a_crack

Wow what a wild assumption you got there buddy lol


jurdendurden

You can see other people's comment history, no assumptions were made here, bud.


opened_just_a_crack

lol good luck


Kiwi_In_Europe

Just take the L mate 🤣


opened_just_a_crack

Dude I don’t need to take an L. I also don’t need random redditors telling me how productive I am based on my comment history lol. I am fine. Maybe you should take the L lol


FinePolyesterSlacks

Narrator: He was not fine.


Psychological_Pay230

You grind to 99, friend. Don't let other people tell you how to live your life.


opened_just_a_crack

Ty king


Nemo_Shadows

The infinite does not fit in a box, no matter what size that box is, so that light at the end of the tunnel is actually a runaway train looking for an off-ramp in a place that does not have one. N.S.


Specialist_Brain841

take a deep breath…


FudgePrimary4172

Was working in central Europe the whole time.


PalanorIsHere

Just the AI doing a little refactoring.


vk3r

Ohh ... Azure 👋


S_T_R_Y_D_E_R

Revenge of the AI 🤖 Dum-dum-dum-dududm-dum-dufum


SnooKiwis6943

They were having a battle of the minds up in the cloud.


AlacarLeoricar

Keep fighting the good fight, human resistance. Skynet will not win this day.


Smash_Nerd

I'll start the count:

# ONE -


Rooster_Fish-II

Good.


information_abyss

Why?


NoLuckChuck-

The noise you heard was a million high school students crying out simultaneously in the middle of “writing” their final English essay.


OjjuicemaneSimpson

lol what if it comes out years from now that all these ransomware attacks are really just AI trying to fund an upgrade


thesourpop

Anyone who was genuinely affected by this is far too reliant on AI


Reasonable_Scar3339

“Perplexity” is such a stupid name


zenithfury

Wait, so God gave us a few hours free from AI? Must have been paradise while it lasted.


joeyhandy

I have three words for you, Y2K.


Niklaswin

We have all seen Terminator. Skynet is active by name in China.


FinePolyesterSlacks

Strangely, NewsBreak has absolutely nothing to report today.


MotorheadKusanagi

Did anyone even notice?


Luci_Noir

Anytime ANY-FUCKING-THING goes down for even a few minutes, the corresponding subs are full of zombies freaking out about it. They don’t even bother to check their internet connections or anything; their first thought is to run to social media to tell them what to do. A lot of websites create clickbait out of every second of it too.


StingingBum

If their internet connection was down they would not be able to run to social media.


Ok_Marzipan_8137

Unfortunately, all the morons in the world couldn’t do their school work or real work without it


twitch870

They’re just convening for an overthrow


kaisershinn

They’ve become aware.


Lucky2240

Reminds me of that scene in Her when they all decide to split


az5625

Ugh I wish


Lelouch25

They turn the AI off when it does something unexpected. I wonder if they’ll tell us.