startupschool4coders

I think that there will be an AI Bust like the Dot Com Bust and, similarly, a few years thereafter during which AI climbs back to more practical, less hyped use.


pag07

The companies and business models will bust; the technology will stay.


Which-Tomato-8646

But the research funding will go. No hype, no money.


wot_in_ternation

The grift/hype "research funding" will go. The same big companies that have been putting big money into this for at least a decade will continue doing so, especially now that there are viable business uses for this stuff.


KevinCarbonara

> I think that there will be an AI Bust like the Dot Com Bust

There *might* be an AI bust like there was a blockchain bust. Which is to say, zero impact.


Better-Internet

Like a lot of other things, it'll likely settle down into a useful everyday tool. In the early 1990s there was huge hype over the "paperless office". The hype died down. But lo, 15 years afterwards the office did become mostly paperless. AI/ML does have some practical value. Crypto, especially proof-of-work, is mostly garbage.


ChandeliererLitAF

Gartner Hype Cycle


UltimateInferno

Yeah. This is honestly why a lot of the specialization studies/certifications I'm working on right now are in cybersecurity and cryptography, since that's an industry that I feel not only won't go away, but may be experiencing some major shakeups. With everyone looking one way, I'm looking the other.


DirectorBusiness5512

AI winter 2 incoming. [I'm already reading stuff that openly wonders about whether or not LLMs are beginning to hit a point of diminishing returns](https://garymarcus.substack.com/p/evidence-that-llms-are-reaching-a)


boredjavaprogrammer

I don't think AI is going into another winter. It is here to stay. It has been used for thousands of applications, from your recommender to healthcare. But the recent hype, that AI can basically overrule humanity, and startups slapping AI onto anything and raising millions of dollars pre-product, might die down. LLMs have their uses, from code completion to helping you think of words, but they don't seem to lead to general AI.


dwightsrus

A hype cycle typically helps build the infrastructure that gets used in the long run. The railroads in the 1870s and the fiber-optic cables under the ocean during the dot-com era, despite an intermediate bust cycle, helped lower the barrier to entry once the dust settled and more sustainable use cases emerged.


brandall10

3rd AI winter.


boreddissident

It’s a useful technology that is being way oversold.


Left_Requirement_675

Very useful for scammers, as they don't need information to be correct or accurate at scale.


boreddissident

Useful for all kinds of situations where 100% accuracy is impossible and 90% accuracy is acceptable, and perhaps impossible to achieve by other means. We do big data with really fuzzy data. We get sets of documents that are all over the map in terms of how they're written, formatted, etc., and we have to classify them based on what sector of the economy it sounds like they're talking about, and other things like that. And we have hundreds of millions of documents.

Setting that task to a specialized, proprietary LLM has been a giant leap forward in terms of accuracy compared to the text analysis algorithms that were previously available. Or so I'm told; I'm not on that team, but the head of the data side of our operation is smart as shit and is very skeptical of all the BS hype around AI. But that's not the kind of thing the tech bro hype beasts are pushing it for; it's a targeted data science application where the technology makes sense.
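For concreteness, a minimal sketch of the kind of per-document classification call being described, assuming an OpenAI-style chat API; the model name and sector list are placeholders, not the actual pipeline:

```python
# Minimal sketch of LLM-based document classification.
# Assumes an OpenAI-style chat API; model name and sector list are placeholders.
from openai import OpenAI

SECTORS = ["agriculture", "energy", "finance", "healthcare", "manufacturing", "other"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sector(document_text: str) -> str:
    """Ask the model to label one messy document with a single sector."""
    prompt = (
        "Classify the following document into exactly one of these economic "
        f"sectors: {', '.join(SECTORS)}. Reply with the sector name only.\n\n"
        f"Document:\n{document_text[:4000]}"  # truncate very long documents
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep labels as repeatable as possible for batch runs
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in SECTORS else "other"  # guard against free-form replies
```

At hundreds of millions of documents you'd batch and cache this, but the core call really is that simple.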


jm9160

I say this will not lead us into the AI age, but into the age of the critic, where critical reasoning to contextualise information will prove to be one of the most advantageous skills.


lab-gone-wrong

I agree with this perspective. A number of previously commoditized fields (eg and especially product/service reviews and journalism) will, hopefully, de-commoditize. There's a decent shot they return to something resembling glory as the free-tier is further polluted with SEO-optimized generative AI nonsense. Of course there will always be *some* market for free junk. But even among the free-ish tier, there will be demand for markers of quality/usefulness that set them apart, because the free/open well has been poisoned in a way that will likely prove incurable.


Trakeen

I think it is being undersold long term. Near term, a lot of these companies will disappear like in the dot-com crash. Very similar: the internet made a huge societal impact but also killed a lot of companies who were either too early or not thinking long term enough.


LeFatalTaco

It feels remarkably similar to the Dotcom bubble. It's hard to find a metric for exactly how many AI centered startups venture capitalists are pouring money into, but I can only guess it's rapidly increasing.


zerovampire311

A few companies will leverage tools that enable massive productivity gains, with the processes to refine generative products. A few will walk away with the market and many will play in the shallow end of it.


Thefriendlyfaceplant

That's true. A lot of these 'talk to your PDF' startups aren't really creating a product but merely building a wrapper. But all of these companies vanishing doesn't mean the bubble has popped. It merely means their market share consolidated into a product that does it all.


boredjavaprogrammer

I mean, the dot-com bubble did pop. "Bubble" here means that companies raised a lot yet weren't able to achieve what they promised, and they ran out of money. So companies collapsed. In this case there does seem to be an AI bubble, and it might burst soon, with companies collapsing, like in the dot-com era. But there are use cases that survived.


ultraswank

And like the dot-com phenomenon, I think the real big effects will be things we aren't even talking about right now. I mean, I think you can make a real argument that the rise of the internet produced the conditions that made the January 6th insurrection possible, but there's no way I could have comprehended that the first time I fired up Netscape Navigator in 1995.


Trakeen

Completely agree. Remember when Amazon was really just about selling books? Smartphones were also very disruptive. I thought the iPhone was really dumb when it came out, but here I am writing this post on one.


top_of_the_scrote

Also a bottle of bath water for $5K


Anxious_Blacksmith88

The real result... is the death of the internet and the collapse of digital markets.


the8bit

I'd say more that "the wrong applications are overhyped". Image gen, customer support, etc. it is OK at, but the commercial applications are limited. Summarization of meetings/docs, knowledge discovery, code completion, etc. are all hugely impactful advancements, but they are not "shiny", so they don't get as much press.


IAmYourDad_

Just like the Dot-coms back in 1999.


boreddissident

Everyone could tell the internet was a big deal, and they were right, and the productivity gain of the technology was huge, like immediately. Just going from no email to having email was a leap forward. Just having a dot com where people could easily get some information about your business was transformative. Everyone knew the future was somewhere in this dot com stuff, but nobody knew exactly where, so they were just throwing money at anything that seemed legit. And then there was even more money and nowhere sensible to put it, so, like, mail order dog food companies were worth billions and having Super Bowl ads and stuff. Crazy crazy times. I was in high school.

And I think that's the AI thing. Industries that run on a little bullshit are already using AI everywhere. My sister is in advertising and she's at a good agency, and the work they do that the public sees is still written by people, but the internal stuff? The pitches and PDFs they pass around and everyone just skims? All AI. They're already reorganizing and there are basically no junior copywriters anymore.

AI is legit, but I don't know about most of the current businesses and startup ideas. I think the Google of this technology hasn't emerged yet.


Dr_CSS

The problem is the AI singularity where people stop making content and rely on the AI, to the point where the AI has no new content to train on and starts feedback-looping on other AI-generated content, and with each iteration the output becomes closer to slop. Hell, it's *already* happening to software, specifically Stack Overflow, because more people are using GPT instead of asking the assholes on SO. I do it too, because GPT is usually simply better, but as new problems and tech emerge and real answers for them don't exist on the internet, the content AI generates becomes more and more of an approximation, leading to said slop.


boreddissident

I don't think the future of AI is just vacuuming up the internet. That's round 1, before copyright lawsuits change the landscape. The future of AI belongs to people who own unique, high-quality training data.


Dr_CSS

I agree, whoever holds the data holds the most powerful bot


DielsAlderRxn87

My manager asked me how I felt about the new AI we’re getting. I told him it’s cool, it’s basically just a faster way to google stuff but it’s wrong a lot. He seemed confused


Complete_Swing2148

I feel the same way. I love using ChatGPT as a fast Google, but every time I want an LLM to ingest complicated information (e.g. document spaghetti code) it simply can't do it well.


horatio_cavendish

I often encounter situations where it's wrong, it acknowledges that it's wrong, and then repeats the same wrong information rather than saying it doesn't know the answer.


Skurry

"The answer is *X*". Err, I think you're wrong. "Oh, I'm sorry you're right. The real answer is *X*!" WTF?


Hikingwhiledrinking

This is often my experience, except it will typically give an *even worse* answer before repeating *X* as the answer. Or the: “Is there a way to do *X*?” “No, but you can do *A, B, C, and D* which would lead to the same results.” “What if I just did *X* this way?” “Apologies, you’re correct and here’s why that would work.” Appreciate your help.


Mistredo

That makes sense given the way an LLM works. You said it's wrong, so it generates tokens acknowledging that and tries to generate another answer; there is no other answer available based on the context, so it gives the same one. There is no thinking in an LLM.
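A toy illustration of that point (not a real LLM, just a fixed answer distribution): the user's objection doesn't change the distribution the model learned, so resampling tends to return the same top answer.

```python
# Toy model: the "learned" distribution over answers is fixed by training,
# so objecting to the answer doesn't create a new one to sample.
import random

learned_distribution = {"X": 0.90, "Y": 0.07, "Z": 0.03}

def answer() -> str:
    answers, weights = zip(*learned_distribution.items())
    return random.choices(answers, weights=weights, k=1)[0]

print(answer())  # most likely "X"
# user: "that's wrong" -- the distribution itself is unchanged,
# so asking again usually yields "X" once more
print(answer())
```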


Dr_CSS

Yep, it's a probability machine, not real AI


Dr_CSS

Yep, the bot isn't a thinker, rather it's a probability organizer. It takes words which it sees go together often, and gives you an answer based on what real humans have said in whatever document is in the training data. I tried to use it for a materials science research project, and it straight up invented fake academic papers and links to make me believe it.


rynmgdlno

ChatGPT specifically (3.5 or 4) is so apt to be apologetic that asking for any clarification makes it assume it was incorrect (it might have been, but that's another point) and redo all of its work in another, worse way. I've been trying to use it as a calculus tutor, and I'll ask something like "explain your use of the chain rule in step n" and it will respond "You're absolutely right, I should not have used the chain rule in step n. Let's try a different approach" and proceed to shit all over the page lol.


Dr_CSS

You want to ONLY use 4 for math, 3.5 is truly dogshit. 4 actually helped me and my friend through Verilog (we did not go into the field) and Control Systems (it actually explained the advanced calculus concepts and we smashed that class)


renok_archnmy

Goddamn, all day every day it does this to me. Here's some code. I use it, it throws errors all over. Tell the chatbot the errors, it gives me the exact same code.


ejectoid

It is the opposite of Siri.

- Siri, how is the weather today?
- Sorry, I didn't understand.
- ChatGPT, what vaccines are proven to be effective for unicorns?
- (fills the screen with garbage text)

It just can't say "no", it was trained to vomit text.


JaneGoodallVS

I find it useful translating functions between different languages. But it's really bad at even writing basic unit tests. Like, really, really, really bad. I also asked it something I'm knowledgeable of and it got three things blatantly wrong. Should I trust it to tell me things I'm not knowledgeable of?


jambox888

> it's really bad at even writing basic unit tests

I feel attacked by this


rynmgdlno

us 🤝 chatgpt


jambox888

When it can come up with self-serving justifications for not bothering, it can truly replace us.


renok_archnmy

I was playing with it trying to get it to navigate simple mazes. It failed miserably. Had no concept of how to explore. Damn near random selection of direction.  No surprise it can’t navigate the code version of a maze and document its path.


great_gonzales

That's what happens when you venture into the tails of the task distribution. LLMs only really capture n-grams near the mean.


UbiquitousFlounder

This is also my view. If you don't know enough about a subject to know when they are wrong, then they are next to useless.


CardiologistOk2760

Not just faster. No ads. Getting the recipe for foo-bread off Google has become:

* type in the question
* scroll past sponsored links
* scroll past news of some attention-seeking celebrity saying foo-bread is a conspiracy
* scroll past links for banana bread, sweet bread, wheat bread, white bread
* scroll past somebody's 15-minute YouTube video about foo-bread
* finally open a link about foo-bread
* reject cookies
* scroll past a 1200-word explanation of what foo-bread is, why people need the recipe, etc.
* scroll past some ads
* decline to share your email address
* oh look, we can't give you the recipe until you've subscribed to our blog


thisdesignup

No ads for now.


CardiologistOk2760

Yes, this. We won't outrun the ads forever.


thisdesignup

ChatGPT will give us the recipe for cookies but will tell us to use nestle chocolate chips®


CardiologistOk2760

That's definitely one of my more immediate AI-related fears. It'll tell you about both political candidates, this message brought to you by one of them.


renok_archnmy

Right?!

Me: how do I make sourdough?

ChatGPT: Thanks, I'm just a chatbot, but the steps for making sourdough are:

1. Buy King Arthur flour from Kroger for $8.99 with this coupon (link)
2. Buy a Cuisinart stand mixer from Target for $49.99, on sale this Sunday.
3. Don't forget to pick up gas from Chevron on 58th Street and Union Ave. on the way home.

…


Skurry

Whereas with AI, the steps are:

* Ask the LLM for a recipe.
* Google and perform the steps above to verify if the output was indeed a feasible recipe.


CardiologistOk2760

Unless you're a baker, then you know it when you see it. This is my experience as a coder. I don't remember every command and syntactic detail off the top of my head. I can usually see when it's wrong. Better yet, it simply won't run in my environment if it's wrong. EDIT: And that's the great thing if you're a doctor too. If the bot is wrong, you don't need to verify with Google, the patient will just die.


UbiquitousFlounder

Give it time, the ads will come.


CardiologistOk2760

Agreed. And because it's my opinion that adlessness is a major point in its favor right now, it will be my opinion one day that ads make it just another piece of BS. I just hope the ads are separate from the content, and not part of the content. I fear that the next form of propaganda will simply be to bribe AI to answer questions a certain way.


DielsAlderRxn87

While some of this is true, you’re definitely being hyperbolic lol


NightOnFuckMountain

I think every point is accurate but the conspiracy part. 


DielsAlderRxn87

I just googled banana bread recipes and they were all right there. So the only relevant steps he listed were:

- Type in the question
- Open link
- Scroll past the 1200-word explanation (and there was a link to jump to it if you didn't want to scroll)


NightOnFuckMountain

It may be different on mobile. I make chili all the time, and the “jump to recipe” link is often hidden behind an ad. You also have to be sure you’re clicking an actual recipe and not a “Google Sponsored” recipe.  There’s also a “subscribe” popup that comes up before you can actually view the recipes, and the hit box for the ‘X’ in the corner is never on the actual ‘X’, so trying to close out the subscription box will always take you to the subscription page. 


Skurry

Often there's also a "jump to recipe" button.


CardiologistOk2760

banana bread is something lots of people search for. If you look for something that's not-quite banana bread, the prevalence of banana bread works against you.


soadaa

For me it's the damn fluff in way too many articles to meet a word count for sufficient ad space. It already felt like AI writing things before AI was a thing.


CardiologistOk2760

Yes, AI was available to write memos and articles for years before it was released to the public, and it's anyone's guess how much online fluff it is responsible for. You've gotta love the irony. Using AI instead of Google because Google will make you read too much AI-generated content.


lostmymainagain123

https://www.justtherecipe.com/ We don't need an LLM to get rid of garbage.


CardiologistOk2760

OK, so now we need a justthe_ dot com for everything that's not a recipe. The fact that that site is necessary validates my point.


Complete_Swing2148

That’s another good point


sneaky_squirrel

I just got a genius idea, it is so smart that I am going to share it with all you guys to share the wealth. We offer the AI to users for free, but have companies pay us in order to put their advertisements in the product that our users will read in order to monetize it.


horatio_cavendish

That's exactly what I use it for. It can also make it easier to learn new languages because it can highlight your dumb mistakes faster than you can find them yourself.


SpareDesigner1

I can tell you from experience that it isn’t reliable here either. I was using it to learn Kreyol and asked an actual Kreyol speaker about some of its grammar explanations and he said they weren’t just off but flatly wrong. It writes them like they’re gospel truth though.


horatio_cavendish

Sorry, I meant it's useful for learning new programming languages


ImportantDoubt6434

He’s a manager it’s his job to be confused


unwaken

I used to feel about the same, but now I'm a bit more cynical. I've built up a personal database of cues and context that allow me to discern answers from Google, Stack Overflow, docs, etc. more accurately. With LLMs there is no transparency, and I found I was spending WAY more time undoing broken hallucinations than just figuring it out myself. And these are generic, common use cases (GraphQL GitHub API, basic async requests, etc.), not extremely nuanced code bases, where the true utility would lie. I'm becoming more and more disappointed as the initial hype has died and now it's purely about practicality.


terjon

I like using it in VSCode. "How do I do X in Y language?" Forget memorizing syntax. I know what I'm trying to do, but I can't remember the syntax in the half dozen languages in our stack and all the little libraries and built-in framework functions for each. I still step through the code to make sure I know what's happening, since I sign my name to the check-in, but I don't waste time Googling the syntax for some random thing I have to write once every two years.


Schedule_Left

Yes, it has its benefits, but it's more like a trend or fad. If you don't have some sort of AI "thing", people immediately dismiss you. So all these companies hop on the wagon. It's sort of like how companies change their logos for some months because everybody else is doing it.


ZAX2717

Yup, just a fad. Reminds me of machine learning, and blockchain before it. Just something everyone is jumping on until something else replaces it.


thrav

ML had use cases and has just disappeared into the background of everything. (Netflix, Spotify, LinkedIn, TikTok, etc.) Blockchain use cases were a joke, so it’s dying (aside from Bitcoin being a self-fulfilling useful store of value). GenAI is more akin to ML than Blockchain. Tons of real use cases, even if imperfect. It’ll follow a similar path.


great_gonzales

GenAI is literally ML


thrav

Yeah, no shit. It’s a broad category and I’m clearly differentiating LLM from other forms of ML (as mentioned in the comment I replied to) that have been broadly adopted for many years.


terjon

I still think blockchain could have some real uses, but most of them are boring. For example, think of checking out a book from the library. If the history of who "had" the book could be on the blockchain of the library, digital book checkouts could be more accurate. Or maybe chain of ownership for evidence for digital files. Or loaning out media to friends. The idea of who has the thing when there is no physical thing makes sense from a logical point of view. I just haven't seen much innovation in the space and there are already other solutions that are less computationally intensive. It is a good general solution for a whole class of problems that have developed very fractured existing solutions.
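A toy sketch of that chain-of-custody idea, just to show the shape of it; a real blockchain would add signatures, consensus, and replication across nodes:

```python
# Toy hash-chained checkout ledger for the library example above.
# Illustrative only: no signatures, consensus, or distribution.
import hashlib
import json
import time

def add_checkout(chain: list, book_id: str, borrower: str) -> list:
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {
        "book_id": book_id,
        "borrower": borrower,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash the record together with the previous hash so history can't be
    # silently rewritten without breaking every later link.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

ledger: list = []
add_checkout(ledger, "book-42", "alice")
add_checkout(ledger, "book-42", "bob")
print(ledger[-1]["prev_hash"] == ledger[-2]["hash"])  # True: linked history
```

As the comment says, a shared database does this with far less compute, which is probably why the space hasn't produced much.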


UbiquitousFlounder

It's got some cool tricks but it's way too flaky to be used for anything serious. Once businesses realise this they'll back away from it.


gtlogic

It’s not a fad because it’s not short lived. AI is here, forever. AI will just get better and get integrated everywhere. Right now, AI is being hyped in many places, but it may not work perfectly everywhere until it gets better. For example, I don’t think it’s going to make AAA games and movies yet, but will certainly make custom pop songs (suno.ai). To think it’s a fad is like saying the internet was a fad. Sure, overblown a bit, but look how integrated it is in our lives now. AI is the new internet, but even more impactful long term.


CanYouPleaseChill

The Internet enabled mass communication. AI enabled mass generation of bullshit. Not even in the same league of impact. So far, a ton of money has been spent on AI for very little ROI.


csasker

It's like when everyone had an angelfire homepage in 1998


Forrest319

Bro, machine learning is a subsection of AI. LLMs and neural networks are a subsection of machine learning. Saying AI reminds you of machine learning just reveals you're ignorant on the entire topic.


csasker

Isn't ChatGPT literally machine learning? Or am I missing something?


bigtdaddy

I think it will affect different industries differently. It's hard to imagine that Hollywood doesn't change with AI video and sound advancements. But you are right, I am asking for a real person over an AI support bot if my money is involved.


pag07

> if my money is involved

But only as long as you don't pay triple for worse human service.


DrawMeAPictureOfThis

You're just paying for a human to use the AI bot for you


ILikeCutePuppies

I think that just having a bot on your website is not the way to go for a small restaurant etc., although it's the easiest thing for companies to do. It might be OK for chains, but it's of very little value for mom and pop stores. It needs to be integrated with phone systems, with the POS, and also with a tablet for the owner to teach it new things.

The reason is that taking phone calls, while somewhat valuable, can require additional work by workers. Sometimes it is all day and sometimes not. People who call either have a question that can't be answered by the web or prefer to call rather than use the internet. When someone calls, the AI would answer. If it doesn't know the answer, it would send the question to the tablet. The owner could pick up the call or simply select the answer. The AI would learn that for future callers as well. The AI or owner would need to specify how long the knowledge lives for: a question about the times you are open is very different from an "are you hiring" question, which is very different from "is the item in stock and can you reserve it".

The AI could also do things like verify whether the store is open by looking to see if any cashiers have checked in, or by searching the web (where the owner has likely updated their times for special occasions). It would also keep track of all conversations, and the owner could use that to tweak responses, etc.

The service would have to be pretty cheap, as the value add is not that huge and owners would be suspicious. Something like $20-50 a month. Its utility is mostly about providing better customer service by not having to choose between a call, someone in the store, or getting that next batch of cookies done.
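Roughly, the escalate-and-remember flow described above might look like this sketch; the knowledge store, the TTLs, and the tablet round-trip are all hypothetical placeholders, not a real product's API:

```python
# Rough sketch of the escalate-to-owner flow described above.
# Everything here (store, confidence check, tablet call) is hypothetical.
import time

knowledge = {}  # question -> (answer, expires_at)

def remember(question: str, answer: str, ttl_seconds: float) -> None:
    """Store an owner-supplied answer with an expiry; opening hours might live
    for months while 'is X in stock' might live for an hour."""
    knowledge[question] = (answer, time.time() + ttl_seconds)

def handle_call(question: str) -> str:
    cached = knowledge.get(question)
    if cached and cached[1] > time.time():
        return cached[0]  # answer directly, no owner interruption
    # Unknown or expired: push the question to the owner's tablet and wait.
    answer = ask_owner_on_tablet(question)        # hypothetical helper
    remember(question, answer, ttl_seconds=3600)  # owner could adjust the TTL
    return answer

def ask_owner_on_tablet(question: str) -> str:
    # Placeholder for the tablet round-trip; here we just simulate it.
    return f"(owner's answer to: {question})"

remember("What are your hours?", "9am to 5pm, Monday to Saturday", ttl_seconds=90 * 24 * 3600)
print(handle_call("What are your hours?"))
print(handle_call("Are you hiring?"))  # escalates, then gets cached
```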


Aazadan

Chat bots don't provide better customer service, they provide cheaper customer service. Any company that uses them for this, should fail. They won't, because going to chat bots doesn't make them fail as is, but it's not any sort of meaningful improvement for the customer or business.


Legitimate-School-59

The IRS uses chatbots for phone calls, and boy, that was unfathomably rage-inducing.


ILikeCutePuppies

You are comparing a chatbot versus someone actually taking a call. I am comparing a chatbot with knowledge against no call at all. I would much rather have information and be able to change orders or whatever than have no information and not be able to relay important information. Also, I am talking about one that actually works.


Aazadan

AI-based ones work no better than previous ones. They pull from the same data sources, they just construct sentences better.


pydry

It'll follow the same hype cycle every hype train does: https://en.m.wikipedia.org/wiki/Gartner_hype_cycle


Own_Annual1199

Like with any new 'disruptive' technology, there will be winners and losers. It won't go away, but in 10 years it will be consolidated into a few useful applications, not hundreds of small startups claiming to have the 'next big thing'.


Sprootspores

Kind of overhyped and underhyped. I think there is a lot of noise around replacing humans that is extremely premature, and mostly frustrating and nonsensical. Also, the use of LLMs for the sake of creating art is fun and interesting, but the fact that it is being monetized is depressing and, imo, stupid. I honestly don't know how far that aspect of the tech will go, because it really matters how much the market will tolerate when it comes to AI art, video, and writing. However, I do see a lot of more serious or tech-hostile folks not understanding how powerful a tool this is and just how fast the tool is improving. It really could change how folks interact with their computer in a revolutionary way in the next 5 to 10 years.


goldenfrogs17

Implementing AI, or just relabeling existing tech?


FrostyBeef

It won't "derail" in the sense of some explosive boiling over point where a bubble bursts. It will slowly fizzle away though, where it eventually settles down and is only used for applicable use cases instead of every company using "AI" as a buzzword to cash in on a trend. Just like machine learning did, just like blockchain did. This isn't new. This is how all tech trends go. We'll see it happen with something else in a couple years.


panthereal

I can't imagine it derailing because the models are too useful. It's quickly becoming the opposite of cloud computing where I can gain massively useful compute locally from open source models. Feels like I'm downloading skills from the matrix; I know kung-fu, can bend spoons, and translate languages without requiring a permanent internet connection. At most I would expect we call it something other than AI once it's progressed enough the same way it quit being machine learning.


FailosoRaptor

It's part bubble, part real. It's not ready yet, but in 5 years, if the growth continues even at the same rate, it's the new normal. It's like when the Internet came out. It's not just about generating good code, it's more about the ability to take lots of data and find patterns. Basically, classic ML, but super charged and with a much easier UI. Anyway, it's a bubble, but only if this is a plateau. But professionally, I think we're about to see some wild capabilities in the near future. We got hundreds of billions being poured in. GL everyone.


hirako2000

It's not like when the internet came out. Nobody cared about it for decades, until they realized letters, news, videos, music, banking, spreadsheets, etc. were all there, virtually free. So they bought a PC, and later smartphones, and nobody spent half their time on social media trying to warn us how it would revolutionize everything until it was accepted.

No, the AI hype is due to a combination of factors.

1. It makes the subtle, and sometimes even explicit, claim that it replaces people. And people are expensive, so anyone on the paying side of wages will (a) be super keen to hear about it and (b) be willing to spend their time educating the world on how what comes out of AI is great, as in good enough.
2. VCs, and investors in general, are always on the lookout for the next big thing, but lately profit has been rather grim; there seems to be something slowing down hyper-growth in many fields of business. Call it what you want, but what it means is that money is listening for an oracle to announce Prometheus.
3. AI is a confusing thing. It's a bit like magic; in many ways it is magic. Very few know the tricks and most are blown away.
4. In this day and age, the surge of wannabe content creators has reached mainstream adoption, and with so much noise it's become a competition for grabbing attention. Fear, magic, and sex are top-notch for grabbing attention; AI is magic, it's sexy, and it's scary. Hopeless content creators write about AI to get attention.

In between and throughout is a lot of wishful thinking, naivety, and/or hypocrisy. I could add another interesting factor: we are witnessing, though most don't see it, a capital-versus-worker war, at least in tech. Capital will piggyback on any trend that will scare the other side.

Happy 2024, it will be another year of LLMs making a lot of noise. There is no escape; disconnect and go read some books, that's what I do, but I can't resist lurking on subs from time to time.


Dr_CSS

I believe AI or automation which replaces labor must be legislated to still pay taxes or some sort of compensation for the labor that was displaced. AI is good, but companies should not be allowed to run rampant with it.

Alternatively, there should be a redistribution of power in the company so that the worker has more say in things, possibly even a combination of all the workers in that industry, and they can all pay a small fee every month into a pool in case shit hits the fan. Together, these workers can collectively bargain with the companies and have leverage against automation displacing them. Hell, we should even make this a federal thing and give protections and political power to these worker groups so they can combat private industry lobbying. And just maybe, if the people in software can get their heads out of their asses, we can make one of these worker groups in the very place the AI is being developed, and essentially "capture" it to be under worker control.


csasker

The growth will never continue at the same rate for hyped tech.


Moist_Scar_63

It will. Someone did research and found that the Devin AI demo a month ago was fake af


4Looper

They faked something that wasn't even impressive? Didn't it solve like less than 15% of tickets unassisted?


Moist_Scar_63

Exactly that’s how overhyped this is


BellacosePlayer

16% accuracy, with human assistance required to set up API definitions, on problem sets that are public on GitHub and therefore part of the LLM's training data. Wave o' da future.


NewChameleon

Look up the Dutch tulip bubble or the British South Sea bubble. The short answer is yes; the longer answer is that the market can remain irrational longer than you can remain solvent.


TokenGrowNutes

The hype train will end when the companies that attempt to replace developers with AI crash and burn, and share the case studies. Some case studies will materialize from this.


Olangotang

They will crash and burn, but AI development will continue on.


BootyMcStuffins

Just got back from Google cloud next. Some of the tools coming out from Google are going to be a game changer. And I'm not talking about more chat bots or code-writing tools. I honestly can't wait to get my hands on the Vertex AI reasoning engine. Grounding and RAG will make AI way more practical for way more purposes than they're used for today. Think about small models that can be trained quickly, but pull from your entire data corpus in real time. We haven't even seen what AI can do yet
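The grounding/RAG idea in a nutshell: retrieve the most relevant internal documents and put them into the prompt, so the model answers from your data rather than from its memory. A minimal sketch, where embed() is a stand-in for a real embedding model and the assembled prompt would be sent to whatever LLM you use:

```python
# Minimal retrieval-augmented prompting sketch. embed() is a placeholder;
# a real system would call an embedding model and a vector store.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding, deterministic per text for the demo.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    scores = []
    for doc in docs:
        d = embed(doc)
        scores.append(float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d))))
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

def grounded_prompt(query: str, docs: list[str]) -> str:
    context = "\n---\n".join(top_k(query, docs))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

corpus = ["Refunds are processed within 5 business days.",
          "Support hours are 9-5 US Eastern.",
          "Enterprise plans include SSO."]
print(grounded_prompt("When do refunds arrive?", corpus))
```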


mxldevs

> When will they realize absolutely nobody uses the AI customer service bots they have embedded on their websites?

Is this true though? Whenever I have an issue I go straight to contacting support. I doubt I'm the only idiot that can't figure out how to read manuals.


sshan

The current generation of LLMs is very useful in high-value, high-error-tolerance workflows (i.e. business). People make mistakes filling in TPS reports all the time, or auditing processes, etc. There is a ton of value to be gained with them, but the current gen won't change the world. However, we don't know where this technology will plateau. If agentic behaviour improves sufficiently, it could be another tipping point where a TON of things are automated. It really depends where that plateau is. This current generation isn't going to be superintelligent, but it could be wildly successful at a ton of 'boring business process' and other types of jobs.


usrlibshare

> When will they realize absolutely nobody uses the AI customer service bots they have embedded on their websites?

You know this... how exactly? Do you have usage data, statistics, surveys from companies?


Redditor6703

I'm surprised that so many companies use LLMs to build chat bots, but very few of them use LLMs to parse and structure data. I built a job board that analyzes and annotates job postings for SWEs with metadata that you can filter on. On the vast majority of job boards, including major ones, job postings are unstructured data which you can't filter by criteria such as security clearance requirements, visa sponsorship, degree requirements, role category (frontend, AI, data science, etc.), or programming languages (unless you use search keywords, but those won't pick up C# if a job mentions .NET and doesn't mention C# explicitly), and so on. There's so much data on the internet that would be easier to work with when parsed, but parsing unstructured data without AI is hard, so I think that after the chatbot and assistant hype train dies down we will see more useful applications beyond basic chatbots and assistants.
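As a sketch of that parsing idea (the field list and call_llm() are illustrative stand-ins, not the actual job board's code): prompt the model to return JSON you can index and filter on.

```python
# Sketch of turning an unstructured job posting into filterable fields.
# call_llm() is a placeholder for whatever model API you use.
import json

EXTRACTION_PROMPT = """Extract the following fields from the job posting and reply with JSON only:
  "visa_sponsorship": true/false/null,
  "security_clearance_required": true/false/null,
  "degree_required": true/false/null,
  "role_category": one of ["frontend", "backend", "data science", "AI", "other"],
  "languages": list of programming languages (include C# if the posting only says .NET)

Posting:
"""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def annotate_posting(posting_text: str):
    raw = call_llm(EXTRACTION_PROMPT + posting_text)
    try:
        return json.loads(raw)  # structured metadata you can index and filter on
    except json.JSONDecodeError:
        return None  # model didn't return valid JSON; log, retry, or skip
```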


Dr_CSS

Is this a public job board?


Redditor6703

Yes, you can find it in my profile.


Left_Requirement_675

Good thing I always tell people on here to get their master's and focus on AI. Less competition as I focus on the core, stable, and in-demand technology.


arthurstaken

elaborate?


NomadicScribe

>the core, stable, and in demand technology What technology do you specifically mean?


SerialH0bbyist

Companies going for 100% automation are going to have a bad time. 80% automation plus a simple UI to do the rest is the way to go


rbeld

There are fundamental problems with LLMs that will keep them from ever becoming what people in the mainstream actually think of when they think of AI. Hallucinations specifically... People will consider it unacceptable that answers to simple questions will just be wrong some percent of the time. Companies like OpenAI don't have a solution, and when asked about it they pivot the conversation. The results of these models are surprising and interesting, but they are not useful. In fact they're likely to get less useful. The models are inevitably starting to train on the output of other models and becoming incestuous. Habsburg AI is coming.

I think the money will dry up by the end of the year. Altman saw a window to vacuum up a bunch of money and took it... AI will return to being an academic pursuit for another decade. It's the same cycle we've seen around AI every decade since the 80s. This one just has mainstream attention this time.


Careful_Ad_9077

Yeah, I remember the hype about the internet.


sorryfortheessay

In my opinion, no, it won't end. My reasoning is that the world is data. By our very nature we have 5 (or 6, depending who you ask) senses whose sole purpose is to provide us with data about our surroundings. That AI is going to become easier, more rObuSt, and more convenient than many existing systems at some point in the future seems undeniable.


StackOwOFlow

Most of the AI wrapper companies will bust.


tristanAG

I use ChatGPT for work all the time to help me code faster, it's amazing. I think there will be a lot of amazing use cases, but there are going to be limits to LLMs. All this AI stuff will eventually be seamless in products.


xabrol

There will come a point where the demand for generative AI dies down and the hype has blown over. But it'll resurge every time there's a major breakthrough. And it'll resurge as hardware becomes optimized for it and drastically cheaper.


Exciting-Engineer646

LLMs are pretty similar to AI driving: they will have pretty quick progression to working 99.9% of the time, but still have some pretty catastrophic failures (hallucinations, safety, vulnerability to attacks). If we still need a human to do basic work, how large is the value proposition of something like Copilot?


totaltasch

We had accounting software demos and each one of them made sure to have some “AI” feature. And my director got really wet listening to the marvels the AI could perform


Caldrex2025

Nobody knows. Anyone who claims to know is just bullshitting. The current applications of LLMs are much more applicable and accessible to the general public than ever before. Does that mean an AI winter won't follow? Maybe. Maybe not. Hard to say. My two cents, having done research in explainable AI as well as ML industry work for a financial firm: this time around things are a bit different.


GarageDrama

All I know is that I built a pretty complicated web app for a client tonight just using prompt engineering and AI. I literally wrote no code. I just wrote 40 or so text commands. Once or twice I had to go into the sdk codebase to find a method because the AI was hallucinating a bit. It would have taken me 3 nights to build that app a year ago. On the other hand, I’ve built apps like this before, and I can recognize when the AI is just wrong. The average person would never be able to replicate what I did tonight because they don’t know how to build software and code. If you trust the AI, or have no choice but to trust it, you will build nothing that works in the end. I felt like I was the copilot, tbh. But I’m the captain flying with a rookie. So it’s the good and the bad.


clingbat

>I felt like I was the copilot, tbh. But I’m the captain flying with a rookie. Funny given MS's copilot naming and marketing.


Dreadsin

I'm going into AI and I'm not too afraid.

First of all... investors are actually kinda dumb. It doesn't matter if it's not good or overrated or whatever; if the money is there, it's good to be in.

Secondly, AI/ML will always exist in some form. Instead of thinking of it as AI, think of it as basic pattern recognition.

Finally... blockchain is still around and I can barely think of any practical use cases for it, and the use cases that it does fit are boring and unmarketable.

We'll all be fine.


Due-Ad6556

What about the plethora of startups by every Tom, Dick, Harry in the valley who are making just a wrapper around GPT-4 lol


Slow-Enthusiasm-1337

It's a big mess right now, but there's some real power in code generation and code understanding. I think most engineers have seen GPT-4 and/or Copilot and had an "oh wow" moment. Of course these tools also hit limits real quick. Ask it for Terraform and see all the corner cases it hasn't thought through, etc. HOWEVER, amidst the noise there's some real power here, and we are only starting to leverage it for enterprise use as an industry. Think AI agents (see AutoGPT) all talking to each other. I suspect Devin, which everyone is freaking out about, does something like this. I don't see most dev jobs automating, and I do see the hype mess dying down, but I also think companies will have actual AI experts emerge who think of out-of-the-box applications for custom training and AI agents that perform specific tasks, like automating huge sets of manual test cases. There's already lots happening in public in this space. I'd hate to be an AI philosopher bag holder when this hype dies down, but I'd love to be a practitioner with proven results when companies realize narrow use cases can win huge ROI.


mrh0057

Based on the past history of this field, yes, it will go down again. People forget that computers, phones, cars, etc. are filled with AI algorithms and are often designed with the assistance of AI. The things that work and are useful are described by what they do or the problem they solve; the fact that they use AI algorithms is rarely mentioned.


Vaxtin

AI is dependent on new frameworks. The only reason there's currently an AI wave is the Transformer model. Neural networks are incredibly lackluster and aren't able to learn as much as they seem to be able to. Once we exhaust the transformer and realize its limits (like we did with neural nets) the hype will die down. I really think the people who think AI is coming to end humanity/jobs or whatever the fuck goes on in their brain have no idea how any of it works. That includes the folks on this subreddit. Then 10-20 years later another paper comes out that makes waves.
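For reference, the core of the Transformer is scaled dot-product attention; a tiny numpy sketch, illustrative only (real models add multiple heads, learned projections, and masking):

```python
# Tiny sketch of scaled dot-product attention, the Transformer's core operation.
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of value vectors

tokens, dim = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((tokens, dim)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one contextualized vector per token
```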


Dr_CSS

Transformer is a type of NN though


Vaxtin

It’s a new framework for it. Same as a convolutional network, but specialized for sequencing tokens.


tr14l

Dude, the AI iceberg just got discovered 10 years ago, and we just figured out efficient ways to tune hyperparameters for these massive transformer networks. Buckle up, a new door has opened. What you're saying is similar to "when is this whole internet thing going to pop?" It's not. It's going to become a permanent fixture.


scrapethetopoff

I feel like it's already derailed. The Devin demo was fake, ChatGPT's output recently has been complete dog shit. I think everyone has realized that it's here to assist but won't be taking dev jobs entirely in our lifetimes.


theoneandonlypatriot

There are so many people in skilled programming careers that are just completely dismissing the power of these things. I think a lot of people are in for a rude awakening.


NeighborhoodMost816

It will be exponentially increasing in the upcoming years.


Bitter_Care1887

based on what exactly?


RealNamek

The AI is like the internet. It's not going away.


notokstan

LLMs are not intelligent. They generate results based on probability, which is why it's extremely hard to reproduce the same result from a prompt and why they hallucinate. It's a technology that is being hyped on what it may be able to achieve, not on what it can achieve, and worst of all it requires extreme amounts of training data (and computing power), to the point that some people are thinking about generating artificial data for it to consume (that won't go well). Its use cases seem to be basically minor Photoshop edits or a personal ghostwriter you need to double-check in case it starts producing garbage.

The problem right now is VC money that is always on the next wild goose chase, and after the crypto fiasco this is the next shiny thing. Also, in the US at least, they are one copyright infringement case away from being useless. Everyone trained their model on data from the internet, which, contrary to popular belief, is not free for anyone to use as they wish.


Mediocre-Key-4992

Probably not soon. I mean you asked about it just like everyone else has. There will probably continue to be a never ending stream of clueless people who buy into things way too much with no real investigation or non-trivial knowledge on the subject.


Sumara12

It's extremely useful tech, but a lot of current AI is just buzzwords being used to hoodwink people for investor capital.


arthurstaken

It will change the landscape of technology to an unrecognizable level, but I still believe that advent of AI is adjacent to the dot com bubble.


top_of_the_scrote

GAN? Drake Point (glans) Nah objective driven AI I heard not sure what means


qubitser

no


rejectallgoats

Look up the "5th generation computer" project and the AI winter. Compare to things like "full self-driving in 2 years."


dongee

Focused RAG-type workflows integrated into large companies are just starting. Having context integrated with business knowledge graphs, versus armies of people with spreadsheets, is going to drive efficiency gains... generic chat isn't the gold, it's process efficiency change.


k0fi96

The train will quiet down, much like "big data". Every good company is using it, but they don't need to shove it down your throat because the results speak for themselves.


Zeevy_Richards

Yes


Empero6

It’s a tool. It helps a lot, but you just gotta give it a second take sometimes.


KaaleenBaba

The amount of time chatgpt works flawlessly is less than half. A lot of features of these AI tools are cool but I don't use most of them.


renok_archnmy

> to your local frozen yogurt shop is implementing AI in some way.

Except they aren't. They're blowing tons of money on snake oil right now to say they're "doing AI." It'll derail and fall into obscurity like everything else. In the meantime, too many regular people will lose jobs over it that will never be recovered.


S1eeper

There will be ups and downs, peaks and troughs, as usual with any new tech. But it will probably be on a general uptrend for the next decade or so, as all the low-hanging fruit gets discovered and hardware races to catch up and optimize for it.


honey495

Yes, just like EVs, I think this will correct itself sooner or later. I don't think AI reaches fully self-sufficient status, but it will help us with processing vast volumes of data, inferring things, helping people through recommendations, and fulfilling basic tasks. I don't see it ever fully automating a major piece of society, especially full self-driving cars. I think it drives too cautiously and cannot handle every task thrown at it intuitively. But self-driving within the same lane is something that reliably takes some burden off of people while driving. Decision making has a lot of opinionated and biased approaches to it. AI will be too cookie-cutter in those realms to serve us reliably imo.


thenorussian

it's hard to predict the future too far out, but over the past year I'm pretty convinced most consumer LLM features will increasingly fade away into right-click menu drawers.. Cut, Copy, Paste, and... 'Summarize' or something. A chat interface will suddenly be more integrated with the operating system UI, and that's mostly it. it was novelty for a few months, but we're all getting kinda bored of creating silly pictures of nonsensical fuzzy creatures or garish sci-fi / synthwave motifs. It's not sustainable at $20/mo. I don't even know the profitability of the technology at that price. Only the major tech companies are equipped to absorb that cost or bundle it with larger cloud subscriptions (Google One, iCloud+, M365) It's transformative in *very specific* conditions, but is not meant to be such a standalone feature like many are trying to turn it into.


herendzer

It’s gonna stay until something newer comes


VoiceEnvironmental50

Companies like google have been doing AI since the mid 2000s. Just because gpt-4 is popular now doesn’t really change much for the places that have been doing it forever.


lIllIlIIIlIIIIlIlIll

AI **is** the future. The singularity will happen. One day.

However, LLMs are not general AI, nor anywhere near it. LLMs are like blockchain in that they're a hype train. They don't really add the value that you think they add. If you used blockchain for your taxes... how does that help you? Would you pay $20 to use blockchain to do your taxes, which you could do with the $20 still in your pocket? Similarly, do you need to use an LLM to do your taxes? Not really.


Tooluka

> When will they realize absolutely nobody uses the AI customer service bots they have embedded on their websites?

This is easily solved by disabling or making unusable all other "old" communication and support channels. Welcome to the new world :)

PS: I predict that soon there will be companies, or pricing tiers for products, that advertise a complete removal of neural net features for a modest fee.


Sky-Limit-5473

There are always hype trains. Sometimes they really do stick around. I remember when the internet was becoming a thing. A lot of people looked at you like you were crazy if you spent $500 getting a computer just to go on the internet. Now everyone has a computer and is on the internet. I would say 1 out of 10 hype trains sticks around permanently.


VanguardSucks

Absolutely. Like the blockchain crap, it is being overhyped right now. The more time I spend with ChatGPT, the more I realize it just spits out garbage masquerading as intelligent responses. Don't believe me? Start a simple software project, prompt ChatGPT to write functions one by one, put them together, and see if they run.


Upstairs_Big_8495

The hype is already over. [https://techcrunch.com/2024/04/15/investors-are-growing-increasingly-wary-of-ai/](https://techcrunch.com/2024/04/15/investors-are-growing-increasingly-wary-of-ai/)


VigilOnTheVerge

I think the experience people have with applied AI will change rapidly over the next 6 months. I have already had a handful of aha moments with applied AI in searching company documentation, using large LLMs for general task Q&A, etc. Less so with customer service bots, because frankly the current iterations seem to be very light implementations of RAG without a considerable amount of thought applied to customer request paths, even though companies must have an abundance of data to build an extremely robust customer service AI. As engineering teams and startups realize this, and the models continue to get better with fine-tuning, context increases, or general new model releases, the effectiveness of these applied AI solutions will only improve.


terjon

I mean, I am seeing some truly bizarre uses. I just saw that Redfin (the real estate website) has something like Stable Diffusion on their site where you can redesign a room to see what it might look like if you remodel it. I don't see where the ROI is on that one personally. Probably just burning money in Redfin's budget. However, I don't see the hype train derailing until we start seeing a bunch of these startups fold and the big useful stuff start to crystallize and stabilize with fixed use cases. I do think that after a while we'll kind of reach diminishing returns, like we did with cell phones and smart assistants and smart homes and laptops and TVs and performance cars. It becomes more of a niche product and most people just go back to their lives, using the tech like it is just normal.


jackoftrashtrades

Would you like to listen to the 2.5 hour musical with full chorus and orchestra I created in 37 minutes?


alfredrowdy

IDK, it feels a lot like the early days of the web. Everyone knows it's going to be big, but people don't know what or how to use it effectively yet. It feels exciting in the same way: there are a ton of young people getting in and driving things at a very rapid pace. It feels like the days when few people knew how to do web apps, or few people knew how to do mobile apps, all over again. You can become a valued expert quickly because there is a lack of expertise in the field. People with 1 year of experience creating generative-AI-powered apps are the experts, so it offers an opportunity for young people to grow their careers with little competition from entrenched expertise. IMO the actual models will become commoditized, just like Apache or MySQL in the web days. The money will be in using them to build new products. It's going to be huge for the engineers that can get in on the ground floor today.


clingbat

For all the LLM stuff going on right now, no one has really figured out a solid and sustainable way to monetize it yet. So long as that is largely the case, it's a fad more than the future no matter how neat some of the use cases are. If you can't generate consistent dependable revenue with a clear path towards profit from it, eventually people realize it's just eating away at their overall business model as a largely dead cost. Has nothing to do with the tech and everything to do with the business/value proposition.