
Naive_Carpenter7321

ELI5 what went wrong? I can't see it.


HMikeeU

"tigers weigh less [...] so on average tigers are 55kg heavier"


Naive_Carpenter7321

The first part says "tigers", the second part says "adult male tigers". Adult male tigers are usually larger than females, so the average tiger overall weighs less than the average adult male tiger. The end of the previous post also compares tigers in general (not strictly male ones) to specifically male gorillas. What was your original prompt?
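A toy calculation (with made-up illustrative weights, not real measurements) shows how the two claims end up pointing at different averages:

```python
# Made-up weights in kg, purely to illustrate the subgroup-vs-population point.
male_tigers = [220, 240, 260]     # adult males, the heaviest subgroup
female_tigers = [120, 130, 140]
all_tigers = male_tigers + female_tigers

avg_all_tigers = sum(all_tigers) / len(all_tigers)      # 185 kg
avg_male_tigers = sum(male_tigers) / len(male_tigers)   # 240 kg

print(f"average tiger:            {avg_all_tigers:.0f} kg")
print(f"average adult male tiger: {avg_male_tigers:.0f} kg")
# "Tigers weigh less than X" and "adult male tigers weigh more than X" can both
# be true at once, because they describe different populations.
```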


Synth_Sapiens

Yes. Stop wasting tokens on idiotic questions.


jugalator

A fun thing about current AI is that it's predictive given the context, which also includes what it has said itself! So if it's corrected by the facts, it just continues from there, even if it was wrong or had a brain fart before. This explains your result, and also why asking for step-by-step reasoning tends to produce better results: it gives the model plenty of opportunities to correct itself as it says things "out loud", which puts that reasoning into its context window.
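You can see the mechanism directly if you drive the model through an API: every reply gets appended to the message history, so the next prediction is conditioned on it. A rough sketch, assuming the Anthropic Python SDK and an example model name:

```python
# Sketch only: assumes `pip install anthropic` and ANTHROPIC_API_KEY in the
# environment; the model name below is just an example.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-opus-20240229"  # example model name, swap for whatever you use

history = [{
    "role": "user",
    "content": "Which weighs more on average, a tiger or a gorilla? "
               "Reason step by step before giving a final answer.",
}]

first = client.messages.create(model=MODEL, max_tokens=512, messages=history)
history.append({"role": "assistant", "content": first.content[0].text})

# Everything the model wrote, right or wrong, is now part of its context, so the
# follow-up turn is predicted on top of it. That's where it gets the chance to
# notice and fix an earlier slip.
history.append({"role": "user",
                "content": "Do the numbers you quoted actually support your conclusion?"})
second = client.messages.create(model=MODEL, max_tokens=512, messages=history)
print(second.content[0].text)
```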


Cazad0rDePerr0

https://preview.redd.it/kt718u47iewc1.png?width=783&format=png&auto=webp&s=77422012776cabc8fd78f4eb077971499d4b009d

Something I really appreciate about Opus, or Claude in general, is its overall attitude toward criticism, not just with simple things like that, but also with controversial topics. Bing will shut down the conversation or spit in your face for disagreeing with her (her, because it's our moody, bipolar hoe).


augusto2345

Claude's attitude toward criticism is the worst, IMO. It basically takes criticism as absolute truth and never pushes back. So if you think it's wrong and tell him that, he'll assume he was wrong and change his answer, even if he wasn't wrong.


ainz-sama619

GPT-4 also provides tons of hallucinated answers. They're chatbots, meant primarily for chatting; if you need to google small facts, use a dedicated search engine. LLMs are not a search-engine replacement.


Anuclano

This is a typical example of the model changing its own opinion mid-stream. It can't fix what it has already said, but it starts to realize it was wrong. By the way, I've even encountered cases where the model caught the error in the middle of a sentence, stopped, and explicitly changed its mind.