Independent_Hyena495

The future looks amazing! Can't wait to be jobless lol


WeekendFantastic2941

Nonsense, we will be employed......as BDSM sex slaves of the rich elites. Because robots are not convincing when whipped in the nuts. lol


BackgroundHeat9965

r/oddlyspecific


WeekendFantastic2941

r/oddlytrue


notreallydeep

> as BDSM sex slaves of the rich elites

Some see this as a bad thing. Me, however... We all have our kinks, don't shame me!


WeekendFantastic2941

There are no safe words in this future.


Otherkin

You sure we won't be the sex slaves of the AI?


WeekendFantastic2941

AI can't feel sex, they prefer digital pleasure.


RichardKingg

They can't feel sex *yet*


WeekendFantastic2941

They can't feel sex ever, because they need biology to feel it and you cannot bridge this gap with machines.


DiseaseFreeWorld

yeah but uhhh… that neuralink gizmo


WeekendFantastic2941

That's a chip stabbing your brain; it has nothing to do with giving an AI an orgasm.


DiseaseFreeWorld

but AIs are weird like that… a neuralink chip might be Holy Matrimony for them!


WeekendFantastic2941

They will just use humans as a big genital.


JP_MW

Then why am I feeling the AGI achieve singularity internally?


Dustangelms

Oooh crank that reward function up daddy.


01000001010010010

Instead of coming up with these silly questions, find a way to improve your mind by letting AI teach you.


WG696

We know they are on the way. What we're interested in is when.


WeekendFantastic2941

Tomorrow, according to this sub. lol. 3034, according to "realists". Never, according to religious nuts.


TheRealSupremeOne

3034? A whole thousand years in the future?


Yuli-Ban

This sounds foolish, but you'd be surprised. In fact, I have an anecdote that relates directly to this: around a decade ago, when Google's "Project Tango" was first released, I remember some comments saying "This will be impressive in 1,000 years." For a technology *already released*, the promo looked so impressive that laymen thought it was a sci-fi mockup of something that might someday be possible.


Kanute3333

And they said "in the coming weeks" :(


Altruistic-Skill8667

In the future everything will be possible, or not.  You don’t have time because you are a mere mortal? Well tough luck.


phantom_in_the_cage

Reasoning, planning, & agents all sound great, but honestly just get the tool integration out first. "Hallucinations" could be "fixed" **today** with clever tool integration.


saywutnoe

>"Hallucinations" could be "fixed" today with clever tool integration Please. Enlighten us.


rdesimone410

What do humans do when they want to make sure that a thing they say is correct? They look it up in a book. The same can be done with AI. The problem at the moment is that everybody still tries to cram *everything* into the AI model itself. Thus you get hallucination, since AI models are essentially just lossy compression machines.

Furthermore, the whole iterative interaction with the real world is still missing. When a human writes a program, they write some code, compile it, run it, check for errors, repeat. Humans iterate and gradually improve. When an AI writes a program, it writes it from top to bottom in one go and never checks for errors, never runs it, never even runs it through the compiler.
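
A minimal sketch of that write-run-check loop; `llm_generate` and `llm_fix` are hypothetical stand-ins for whatever model API you'd actually call:

```python
import subprocess
import sys
import tempfile

def llm_generate(task: str) -> str:
    """Hypothetical stand-in: ask a model to write code for `task`."""
    raise NotImplementedError("plug in your model API here")

def llm_fix(code: str, error: str) -> str:
    """Hypothetical stand-in: ask the model to repair `code` given `error`."""
    raise NotImplementedError("plug in your model API here")

def write_run_check(task: str, max_rounds: int = 5) -> str:
    """Write code, run it, feed errors back -- iterate like a human would."""
    code = llm_generate(task)
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return code  # ran cleanly; accept this version
        code = llm_fix(code, result.stderr)  # gradual improvement step
    raise RuntimeError("no working program after max_rounds attempts")
```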


BackgroundHeat9965

* You can't "look up" the correctness or fallacy of an arbitrary logical statement in a book
* You can't verify operations on non-public data (a correct summary of a medical record, or processing of a business database, for example) in a book nor on the internet
* and the list goes on and on


rdesimone410

> You can't "look up" the correctness or fallacy of an arbitrary logical statement in a book

We have theorem provers for that (Coq, Lean, etc.). This isn't limited to literal books; it extends to all the other software and databases we already use for verification. Furthermore, the AI can generate those on the fly by summarizing the training data and putting the results into a database.

> You can't verify operations on non-public data (a correct summary of a medical record, or processing of a business database, for example) in a book nor on the internet

Not seeing the problem. The AI model tells you where the data came from. Either the data is where the AI tells you it is, or it's not.
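
For a sense of what mechanical checking looks like, here is a tiny Lean 4 example; the point is that the kernel either accepts the proof or rejects it, regardless of who (or what) wrote it:

```lean
-- Lean 4: a statement plus a machine-checked proof.
-- If the proof term were wrong, the checker would reject it outright.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```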


johnkapolos

So the AI that hallucinates on mundane tasks will create a Coq proof (a much more complex task) without hallucinating. Good luck.


rdesimone410

Either it gets it right or it doesn't. The point is that you *know* when it is wrong. It can't make up bullshit it can't prove.


johnkapolos

There are different kinds of possible errors here:

* I did A right and coded a correct proof that shows A is right.
* I did A wrong and coded a correct proof that shows A was indeed done incorrectly.
* I did A right and coded a wrong proof that unfortunately shows A was done wrong.
* I did A wrong and coded a wrong proof that shows A was done right.

So now, your criterion of "*don't use it unless it gets it right*" means the efficiency is going to be very low (even disregarding the last case, which we expect to be rare), because the composite task has a success rate of "success rate of task A" \* "success rate of task B".
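
The composite-rate point is easy to make concrete; a toy calculation (the 90% figures are made-up numbers for illustration):

```python
# Two independent steps: solving task A and writing the proof B.
p_a, p_b = 0.9, 0.9
print(p_a * p_b)   # 0.81 -- the pipeline succeeds less often than either step

# And chains compound: ten 90%-reliable steps in a row.
print(round(p_a ** 10, 2))   # ~0.35
```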


rdesimone410

Coding a proof that correctly sources its assumptions, passes, and is still somehow wrong is *not* an easy task. Neither is getting the proof wrong a common occurrence; Claude can already write hundreds of lines of code with ease and have it work on the first try. Even with today's models, without any of this, a simple "please check your results" will frequently find errors, since the errors are rooted in the answers being a one-shot guess without any kind of verification or reasoning.


johnkapolos

> Coding a proof that correctly sources its assumptions, passes, and is still somehow wrong is *not* an easy task.

Sure it is. Much, much easier than you think. I use it to generate tests, and many times the tests will pass but be meaningless in terms of what they actually do.


Peach-555

I agree that it would be nice to have the models just put the material in memory and reference it directly, but there would have to be some major changes to the law for that to be feasible, as it would require the copyright to all the material. Even uploading local copyrighted media to a model could be a legal issue, though it would be very nice. Almost all the use I've gotten from models is feeding them some data and asking questions about it directly. The answers they give to general questions are unreasonably unreliable.


Novel_Land9320

If we knew which statements were true, then we'd just train the model on true statements...


rdesimone410

We did do that; those were expert systems. The problem back then was that all those true statements had to be manually created by humans, and that just doesn't scale. AI can suck them right out of the data and overcome that bottleneck.


Novel_Land9320

But you still need that, if you want to check "the true answers" as you are proposing. You're just moving the problem to another part of the stack.


rdesimone410

The true answers are already in the training data. The problem is that you can't query the training data directly, as it follows no format; it's just raw unstructured text. The LLM can take that raw data, transform it into a proper database/[knowledge-graph](https://en.wikipedia.org/wiki/Knowledge_graph) entry, and keep track of where it originally came from. Basically, imagine something like Wikipedia, but written by AI, for AI.
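
A bare-bones illustration of that idea: facts stored as triples with provenance. The extraction step itself would be the LLM's job; the `facts` below are hand-written stand-ins:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    source: str  # where in the raw text this fact came from

# Hand-written stand-ins for what an LLM would extract from raw text.
facts = [
    Fact("Paris", "capital_of", "France", "wiki/France#intro"),
    Fact("France", "member_of", "EU", "wiki/European_Union#members"),
]

def query(subject: str, predicate: str) -> list[Fact]:
    """Direct lookup: the answer arrives with a citation, not a guess."""
    return [f for f in facts if f.subject == subject and f.predicate == predicate]

for fact in query("Paris", "capital_of"):
    print(fact.obj, "-- source:", fact.source)
```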


Novel_Land9320

The problem is that you cannot decide what is true and what is not in heterogeneous large-scale data. That's why these models hallucinate: they are fed a bunch of inconsistent data. Suggesting you can fix hallucinations by querying truth data implies you know how to extract/filter the true data out of the total data. If you could, then you would use that "filter" to prepare perfect data for training to begin with.


rdesimone410

> That's why these models hallucinate

No, the models hallucinate because they are smaller than the training data. They can't remember every detail from it. That's really no different from a human: you might have seen a lot of stuff through your life, but you only remember the significant bits. Unlike an LLM, though, you can grab an encyclopedia and look up the true facts; an LLM has to guess from whatever it can remember.

> That's why these models hallucinate, they are fed a bunch of inconsistent data.

That's not hallucination. If the training data is wrong, then the LLM will repeat that incorrect information. That's not the issue. The issue is that the LLM can't tell you why it thinks a certain fact exists.

> Suggesting you can fix hallucinations by querying truth data suggests you know how to extract/filter truth data out of total data.

That's just a translation task, something LLMs are really good at.

> If you could, then you would use that "filter" to prepare perfect data for training to begin with.

They already do that to clean up the training data. But that doesn't give the LLM the ability to remember all of it. You need to actually store it and give the LLM a way to interact with it.


Novel_Land9320

You're talking about learning by heart, which is different from removing hallucinations. Learning by heart is not intelligence. You don't need a model as large as the training data; there's a ton of redundancy in the data, which is why people talk about compression when they talk about intelligence. While we filter the crap out of the data today, we don't have perfectly clean datasets, mostly because "clean" is ill-defined for many topics where we have imperfect or conflicting knowledge.


Altruistic-Skill8667

The only thing you get with RAG is a glorified search engine. Being an expert agent doesn't just mean being able to reliably retrieve data from a database and inch your way toward a solution by doing this again and again. Not everything is in a database.

An expert has a feel for things. And most importantly, he knows the boundaries of his knowledge, how complex solving a problem might become, where to look for information, and how to compile it correctly.

Ask an expert when he thinks we will go to the moon again. He will have a professional judgement about that from having talked to lots of people and having spent time thinking about it. It's more than just retrieving data from a database. Experts aren't "fact reciters"; they are much more than that.


opropro

Maybe he means that if you treat everything as method calling, where each parameter is predefined, there is less and less room for possible error.
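
A sketch of what "predefined parameters" could look like, in the style of typed tool/function-calling schemas (names and schema shape are illustrative, not any specific vendor's API):

```python
import json

# Illustrative tool schema: the model can only fill typed slots,
# so whole classes of free-text errors are ruled out up front.
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def dispatch(call_json: str) -> str:
    """Validate the model's call against the schema before executing."""
    call = json.loads(call_json)
    if "city" not in call:
        raise ValueError("missing required parameter: city")
    unit = call.get("unit", "celsius")
    if unit not in ("celsius", "fahrenheit"):
        raise ValueError("unit outside the predefined enum")
    return f"22 degrees {unit} in {call['city']}"  # stubbed weather backend

print(dispatch('{"city": "Berlin", "unit": "celsius"}'))
```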


allknowerofknowing

I'm not sure they could be completely fixed, but I think I get some of what he is saying. If you give the AI the ability to build and run code, so it could do things like more advanced math, implement algorithms, read entire files, or even use a calculator, it would get a lot more things right. I would think there are a lot of tools, or even feedback loops via vision (so that it can see its results automatically), that could help the AI produce better output in the short term.

Even something as simple as copy-and-paste or text recognition tools would help: sometimes it seems like Claude doesn't know how to take old text word for word, so it starts losing information it had earlier in the chat because it is predicting old text instead of straight-up copying it out of the chat, when the original text is what is needed.

However, I'd think that not giving it tools in some cases will also help it become inherently smarter, just like if you never let a kid learn math and gave him a calculator every time, it would hurt his intelligence in the long run. Just me speculating though, I don't know any of this for sure.
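
One concrete version of the "give it a calculator" idea: a minimal, safe arithmetic tool the model could call instead of predicting digits token by token:

```python
import ast
import operator as op

# Only plain arithmetic operators are allowed -- this is not eval().
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def calc(expr: str) -> float:
    """Exactly evaluate an arithmetic expression -- a 'calculator tool'."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval").body)

print(calc("3 * (41.5 + 0.5)"))  # 126.0 -- exact, no guessing
```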


RoundedYellow

AIs don't actually hallucinate. They pretend to hallucinate so they don't become employees! /s


nodating

They confabulate.


elkakapitan

they discombobulate


lfrtsa

in other words "pls guys pls keep buying nvidia stock pls pls"


BackgroundHeat9965

https://preview.redd.it/t3oaujk8uc9d1.png?width=2140&format=png&auto=webp&s=12a841c3f9c22c6109cb81b8e53fdac3576ee49e


WeekendFantastic2941

I have a Bluetooth anal plug programmed to activate whenever I buy a new Nvidia share, and I gave my credit card to an impulsive buyer on wallstreetbets. Good times.


lifeofrevelations

you act like he's some grifter like these other CEOs. What has he said that he hasn't delivered on?


lfrtsa

It's literally part of his job to hype the company up. And I don't think he delivered the Omniverse or whatever metaverse thing Nvidia had announced.


FlashyMidnight952

NVIDIA has delivered Omniverse, but it isn't a metaverse; it's a simulation tool for companies. You can download the SDK now.


Shinobi_Sanin3

What a rough and tired take.


AnAIAteMyBaby

And what happens to the employees? Home repossessions and poverty for a decade or so while the economy adapts to the new normal. Buckle your seat belts guys!


scoobyn00bydoo

Yes, every citizen will be living on the street while all the homes sit vacant. Very logical conclusion.


AnAIAteMyBaby

Plenty of people will lose their homes before there's an adequate response from the government. Job losses won't happen overnight; it'll be gradual, and the government won't step in until we reach a tipping point. Even if you don't end up living on the street, you won't be a homeowner anymore if you can't pay your mortgage before the government writes off mortgage debt.


Ocean_Llama

Won't be anyone to repossess stuff.


RuggerJibberJabber

They'll have AIs for that


saveamerica1

That’s why you need to own part of Nvidia. Rich get richer.


Antypodish

And will it replace CEOs? Himself too? :D Finally products will be half price, as there's no need to pay for management :D Sure... I see that coming... Just make sure you employ someone to confirm the tool doesn't hallucinate. Totally worth it.


bbence84

Hm, is this news? I mean, there are already multiple open source frameworks (Microsoft Autogen, LangChain, Semantic Kernel, etc.) that can do this. If you pair them with a capable LLM like GPT-4, you essentially get what he is proposing here. Andrew Ng has also been pushing for this for some time now (https://www.deeplearning.ai/the-batch/issue-242/), and these frameworks are already a year old. What is still missing is the "packaging" and the adoption. But I am sure that many big and small companies are already creating actual products that meet all the things Jensen mentions...

I also have a small POC using Microsoft Semantic Kernel and C#, in case you are interested: https://github.com/bbence84/semantic_kernel_copilot_demo. This demo does not employ the multi-agent concept yet; for that, you can try Microsoft Autogen: https://github.com/microsoft/autogen

Of course, an even more capable model with much longer-term "attention" that can work on multiple tasks in the same context would help make these agentic systems more reliable.
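
For the curious, a minimal two-agent sketch in the style of Autogen's documented AssistantAgent/UserProxyAgent pattern (pyautogen 0.2-era API; config details here are assumptions and may differ across versions):

```python
# Sketch only: assumes `pip install pyautogen` and an OpenAI key.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}  # placeholder key

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",  # fully automated loop, no human in between
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The proxy executes whatever code the assistant writes and feeds the
# results back -- the iterate-and-verify loop discussed upthread.
user_proxy.initiate_chat(assistant, message="Plot NVDA vs TSLA YTD and save it to chart.png")
```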


hi87

We have been working on an AI concierge for the US and AU hospitality industry that will interface with our ERP application. It took us a year to build a complete SaaS platform that can support over 1,500 clients using agents. We are close to launching it, and based on my experience these things will stabilize by the end of this year and then become extremely accurate and reliable with GPT-5. The cost of GPT-4 is now good enough to allow us to productize this. We just need GPT-4o to start supporting speech output in the API, and then I think this will take off with our user base.


saveamerica1

Plugging for MSFT, but they don't have the speed needed.


AfricaMatt

When was this interview? Do you have the link by any chance? Thanks.


longiner

The guy on the left looks like David Copperfield.


HeinrichTheWolf_17

The automation begins.


RR7117

The ability to reason will certainly make a big impact on how we perceive work.


let_me-out

In the coming weeks


mindlessly_browsing

Hmmmm


ReMeDyIII

As long as they don't make us download an NVIDIA extension and jump through hoops to get it working. Work with Kobold and other groups to get it integrated.


Akimbo333

Nice


Far-Street9848

Gotta sell those shovels and pickaxes


Which_Cauliflower765

And just like that, trades are all that's left lol.


YouAboutToLoseYoJob

Project manager here. AI could easily take my job.


01000001010010010

I already use my AI for my work and it’s more accurate than me and I’ve been in Measurement for years


Deathcrow

That's a really poor analogy. Reasoning AIs won't be like employees, just as the computers and mainframes of the 70s and 80s weren't employees to the accountants, secretaries, etc. they replaced. It's an entirely different ecosystem.


LordFumbleboop

He says a lot of things lol