AutoModerator

Hi all, A reminder that comments do need to be on-topic and engage with the article past the headline. Please make sure to read the article before commenting. Very short comments will automatically be removed by automod. Please avoid making comments that do not focus on the economic content or whose primary thesis rests on personal anecdotes. As always our comment rules can be found [here](https://reddit.com/r/Economics/comments/fx9crj/rules_roundtable_redux_rule_vi_and_offtopic/) *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/Economics) if you have any questions or concerns.*


Fallsou

AI is a Flareon, apparently. AI is not as useful as its fans think it is, nor as useless as its haters think it is. It's a tool, and like any other, it needs to be applied well in order to create value. It's not going away, and people need to come to terms with its existence.


UnaccomplishedBat889

This. No one contends that AI is perfect and can do it all. It's just a tool that happens to do some valuable things better than we previously could. Plus, we've only seen a tease of what AI is being developed to do, so I wouldn't judge the technology based on the odd interactions we've had with Bing or Bard, though those applications are valuable enough that we are now using them more than search engines for many web searches. But no, they are not perfect, nor will they ever be perfect. They are just tools that allow us to do things we couldn't before. That is all. And frankly, that is plenty.


Illustrious_Wall_449

Given the way AI companies are being priced right now, a change in sentiment here could singlehandedly kick off a recession.


LessonStudio

I work in ML and use LLMs for coding, business, and other help, and also use them directly as a solution. But I am not an expert in the guts of LLMs. What I do have is decades of experience with software development.

My entirely gut feeling is that LLMs have a math problem: to greatly exceed what they do right now will take far more resources than even Moore's law is going to soon provide.

I often ask LLMs to write a meeting agenda, a response to an email, or some basic code. If I keep it simple, without any secondary levels of thought, they are very good at this. But a very simple threshold gets hit where even a child in grade school would do far better. For example, I have a language tutorial, the sort of text that says things like "The French for yes is Oui." The text wasn't very long, and I asked the LLM to put <> around any French text. It entirely blew this task. I suspect a halfway competent child in grade 2 could accomplish it.

LLMs are basically like people with extreme rote knowledge but no practical skills. That's fine, as it is handy to have someone around with extreme rote knowledge for certain problems. But this extreme lack of practical skills is why someone was recently able to get an LLM to say that the best way to keep the cheese on a pizza was with glue. These LLMs don't lie because they are evil, nor because they are stupid, but because they don't have a practical sense that what they are saying doesn't align with reality. The lies seem to form when the LLM doesn't have the rote knowledge, so it grasps for vaguely related knowledge to fill in the blanks.

I highly suspect that if a confidence score were available, it would be near 100% when you ask for the capital of the USA, and much lower when it suggested glue for pizzas. What really fools us is how extremely well it writes both answers. They read like that kid in HS who had essay writing down to an art.


nanotree

Also a software dev with some minor education on the guts of ML primitives. LLMs aren't intelligent. The psychology behind why we are easily fooled into believing there is any intelligence going on is a parlor trick; it says more about how we perceive intelligence. That's not to say that LLMs aren't neat tools with lots of possible use cases. But they only mimic a fraction of what a human brain can do, and you don't need math to understand why.

The standing theory has been that if you feed enough data to LLMs, they will eventually be able to "understand" language, and the concepts behind the language will begin to materialize. But language is not the substance of thought in humans; that comes from somewhere else. We use concepts to break down the complexity of reality, and we use language to communicate those concepts. Intelligence is the response of our consciousness to external stimuli. Language exists because of intelligence, not as the precursor to intelligence.

So IMHO, LLMs don't approach consciousness or intelligence on a fundamental philosophical level in a way that can produce intelligence. It's just the wrong approach for general artificial intelligence. Maybe LLMs will be the translation piece on top of some real artificial intelligence one day. But they're not going to magically become intelligent just by feeding them a bunch of text.


Lyrebird_korea

When I first tried ChatGPT and asked it questions about my field of expertise, I was flabbergasted. Every answer was correct, and only a handful of people in the world could have answered those questions. One of the areas is image analysis, and ML is fabulous there. Lastly, I work with Chinese students who have become much more productive since the AI boom. Their writing has improved, and there is less correction work for me. Granted, there is a lot more fluff and BS, but herein lies a positive message.

Am I afraid to lose my job? No. AI handles mundane, mind-numbing tasks very well. It is excellent at being perfect. But it is terrible at being creative or innovative. It is stupid, fails to handle simple logic, and as a result outputs nonsense.

Any post about the dangers of AI should be taken with a grain of salt. Any post about how we are going to lose our jobs? Ditto. It will likely lead to more jobs rather than fewer, and it will certainly improve productivity.


kiyonisis_reborn

To be fair, there are a lot of...less than optimally intelligent...people in the world who perform mostly mind-numbing tasks. There are a LOT of jobs that will be made redundant by AI. Clearly innovative and creative jobs are not at risk any time soon, but there's a good chance it will still be significantly disruptive to employment, and people who simply don't have the capacity to perform at higher cognitive levels are going to have a rough time.


myhappytransition

> but there's a good chance that it will still be significantly disruptive towards employment and people who simply don't have the capacity to perform at higher cognitive levels are going to have a rough time.

You could say the same things about:

* fire
* the wheel
* the cotton gin
* the written word

That's pretty much how the advance of technology works, and always has.


Dirks_Knee

I would add that a lot of work people call creative is highly derivative, and AI excels at that type of "creativity" as well.


snek-jazz

> But it is terrible at being creative or innovative.

I was recently speaking to someone who works in the animation industry, and he told me that the job of mock-up artist is basically already gone because AI can do it.


Lyrebird_korea

AI can answer more than 90% of the questions doctors get asked. But this is a leap forwards, not backwards. This technology will do more good than bad.


mtbdork

The danger of AI is how it can confidently misinform an uneducated individual.


Lyrebird_korea

True. Schooling is as important as ever. But this could also fit in the realm of misinformation, with the misinformed screaming about misinformation while they are the ones who are misinformed.


wildemam

Even responding to emails may produce a very undesirable output. Many times, very delicate differences of tone or expression completely obscure the actual intent of my mail. These LLMs cannot grasp that I am trying to pinpoint liability, ask for a favour, prove a point, let someone interfere, or accomplish any other function an email can serve. They just write in their mechanically precise format.


waterwaterwaterrr

I tried like hell to get it to generate a simple mnemonic. It could NOT do it. I even gave up on wanting it to make sense. It still couldn't hit all 8 of the letters, or get them in the proper sequence. The thing isn't dangerous; what's dangerous is that these dumb things are going to be given responsibilities they can't handle.


PM_me_your_mcm

I think this is the third time I've seen this hype cycle.  Unfortunately for me this is the first time that it didn't get me a big raise.  I keep telling people the Terminator isn't coming for them, we're still struggling to match up names better than a human, but it always falls on deaf ears when NVIDIA being worth more than Germany is on the line.


Tactical_Laser_Bream

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


PM_me_your_mcm

It's the blatant dishonesty of claims like that which drives me absolutely insane. ChatGPT is amazing and a valuable tool, but it absolutely isn't as intelligent as a high schooler, and anyone who actually understands the technology would never make that comparison to begin with. It's quite impressive that it can basically fool a person into thinking in those terms, that this machine is actually thinking. But a big part of that is that the models and technology have advanced to the point where it's easier to imagine them as thinking than it is to conceive of them as a big pile of linear algebra picking the best next word of the response. People tend to struggle with basic statistics; it's not hard to imagine them struggling to process the idea of a program that picks words based on some form of probability in response to a prompt.
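The "pick the next word by probability" loop can be sketched with a toy bigram model. To be clear, this is nothing like a real transformer internally, and the corpus here is made up for illustration; only the outer shape of the loop (score candidates, sample one, append, repeat) is the point:

```python
import random
from collections import defaultdict

# Toy bigram "language model": estimate next-word frequencies from a
# tiny corpus, then generate text by repeatedly sampling the next word
# in proportion to how often it followed the previous one.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    candidates = counts.get(prev)
    if not candidates:  # dead end: word never seen with a successor
        return None
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

random.seed(0)
word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    out.append(word)

print(" ".join(out))
```

Every generated transition is statistically plausible given the training data, yet nothing in the loop checks whether the output is true or sensible, which is the whole point of the comment above.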


Tactical_Laser_Bream

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


PM_me_your_mcm

That is the best analogy for this I've heard and I'm probably going to steal this at some point after doing a little more homework.


Accomplished-Gear527

This is what I want to scream at people...it's just a bunch of weights in a model being adjusted at large scale. It is good for a lot of things and has some pretty good use cases...but I've also seen it royally mess up stuff. I honestly don't quite understand the unit economics behind it, all things considered.


welshwelsh

>it's not hard to imagine them struggling to process the idea of a program that picks words based on some form of probability in response to a prompt.

I think you're missing the mark here. It's not that hard to grasp that LLMs are selecting one word at a time probabilistically based on their training data. The unspoken assumption you have is that humans are substantially different. Do we have any reason to believe that human thoughts are not generated one word at a time in a probabilistic manner in response to a prompt (such as a visual stimulus or a question from another person), just like an LLM? I am strongly inclined to think that they are.

>Chat GPT is amazing and a valuable tool but it absolutely isn't as intelligent as a high schooler

A very puzzling statement. Considering how useless and unintelligent the average high schooler is, I don't see how an LLM that is even less intelligent could be an "amazing and valuable tool". Again, you are overestimating the capabilities of humans.


PM_me_your_mcm

Let me rephrase my point there a little. For the average, everyday person, one more concerned with baby pictures on Facebook, making the car payment on time, and getting dinner on the table, with little to no experience in statistics or programming, it is much easier to imagine the machine on the other side as a thinking, feeling thing, just like a human, than to conceptualize it as the culmination of a lot of linear algebra constructing a response from probability and a large dataset, because that's beyond their normal, day-to-day experience. It seems like a thinking person to them, and does a very good job of it, and they have experience with people, so that's how they look at it.

I think one of the key differences between your suggestion and mine is that you're talking about expressing a thought as a probabilistic process of picking words, and I would refine that by saying that language and communication for humans is almost exactly that process, but I don't think the ability of a program to convincingly construct language is equivalent to thinking.

That's also why I say LLMs are absolutely valuable. They have definitely figured out a piece of what a generalized intelligence would need: language. Defining what qualifies as actually thinking is more a philosophical question, probably the source of a fun conversation over drinks with the right people struggling with questions that might be beyond answers. But for me, and I think for a lot of people, thinking as a human does involves a bit more than just translating the thought into language, and LLMs are good at the latter and not so great at the former.

Let me rephrase the high schooler bit as well: it's not that I think either is smarter or more or less useful than the other, it's that I don't think they're the same thing. If the question is which can write a better essay on Shakespeare, I think we know the answer. But intelligence, or thinking, is an axis I can't compare them on, because I would assert one does it and the other doesn't, and at the moment can't.


SomeRandomGuydotdot

1) The entire point of using a non-linear activation function is that it simplifies the proof of universal function approximation.

2) While there may be a single-layer equivalent, which is still up for scholarly debate, it's almost certainly less efficient.

But hey, keep shitting on people who don't understand basic statistics.
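For what it's worth, the point about non-linear activations can be shown in a few lines: composing linear layers collapses to a single linear layer, so depth buys nothing without a non-linearity between layers. The weights below are arbitrary toy values, not a claim about any particular network:

```python
# Composing two linear "layers" g(f(x)) yields just another linear map,
# so stacking linear layers adds no expressive power. A non-linearity
# between them breaks that collapse, which is what the universal
# approximation results rely on.

def linear(w, b):
    return lambda x: w * x + b

f = linear(2.0, 1.0)   # first "layer"
g = linear(3.0, -2.0)  # second "layer"

# g(f(x)) = 3*(2x + 1) - 2 = 6x + 1 -- already a single linear layer.
collapsed = linear(6.0, 1.0)
for x in (-1.0, 0.0, 2.5):
    assert g(f(x)) == collapsed(x)

# Insert a ReLU and the composition is no longer linear: it now has a
# kink at x = -0.5, so no single (w, b) can reproduce it everywhere.
relu = lambda x: max(0.0, x)
h = lambda x: g(relu(f(x)))
print(h(-1.0), collapsed(-1.0))  # the two now differ on the negative side
```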


lonestar-rasbryjamco

Just yesterday I had ChatGPT print out a basic probability function and then make up a result. As in, not even close, even though it had the input and the formula right, which should have been in the bag for an LLM. If it’s a high schooler right now, it’s one of the dumb ones.


MagicWishMonkey

hah, yesterday I asked it to generate javascript to draw a polygon of a texas house district with the google maps api and it kept drawing random shapes until it finally admitted that it didn't have access to GIS data. lmao


Mobely

Did you use the Wolfram Alpha plugin? LLMs cannot do math.


lonestar-rasbryjamco

Good question, I did not. Come to think of it, it was actually the enterprise version of Copilot, as I was using it as part of VS Code, not ChatGPT. The main takeaway was just how confidently it presented the results. It even diagrammed the formula correctly. But then, without breaking stride, it spat out a probability that was significantly off once I checked the math. The dang thing even had the gall to argue with me until I walked it through the correct way to compute the probability of an event over a series. Using its own dang formula!
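For reference, the kind of calculation being described, the probability that an event with per-trial probability p happens at least once across n independent trials, is a one-liner. The p and n below are made-up illustration values, not the ones from the session described above:

```python
# P(at least one occurrence in n independent trials) is the complement
# of the event never occurring: 1 - (1 - p)^n.

def at_least_once(p: float, n: int) -> float:
    """Probability the event occurs >= 1 time in n independent trials."""
    return 1.0 - (1.0 - p) ** n

p, n = 0.1, 10
print(f"P(at least once) = {at_least_once(p, n):.4f}")  # 1 - 0.9**10 ≈ 0.6513
```

The model can recite this formula verbatim and still emit a wrong number, because generating plausible text and executing arithmetic are different operations, which is why tool plugins like Wolfram Alpha exist.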


complicatedAloofness

These posts aren’t as convincing as you think, as there is significant output variance based on input skill.


lonestar-rasbryjamco

🤣 “Input skill” 🤣 Did you not get the part that the model recognized the input, had the correct formula, then spit out the wrong results in spite of itself?


complicatedAloofness

I stand by the comment


lonestar-rasbryjamco

LOL, okay. 👍 But arguing this is some kind of PEBKAC error is just downright silly, no matter how pretentiously you dress it up. Even more so when you consider the entire *point* of LLMs is a seamless conversational UX.


complicatedAloofness

That is not the entire point of LLMs and why OpenAI is more focused on having others use their API to refine and develop applications


alexp8771

I don’t think anyone actually believes this. It is a pump-and-dump scheme worth hundreds of billions that everyone in the entire tech industry is in on. This has got to be the biggest fraud in history.


AmethystStar9

Ed Zitron hit the nail on the head. Big Tech is running out of new gimmicks to sell as the next big thing, but they've conditioned themselves and their stockholders to expect never ending growth and an endless supply of new worlds to conquer, so they have no choice but to go all in on every "new" idea that passes by. AI is just Google Glass, the Metaverse, etc.


LillyL4444

I bought a new clothes dryer. Just like my old dryer that was 15 years old, it has a moisture sensor to know when the load is dry. But the new dryer calls this exact same basic feature “AI Dry.”


JohnLaw1717

I feel like technology has permeated every aspect of our lives. The metaverse and Google Glass aren't representative of what's going on with technology's overwhelming success.


AmethystStar9

Well, of course, technology has permeated every aspect of our lives. I'm not sure if you took my comment as a condemnation of technology as a whole as useless, but that's not what I'm saying. What I'm saying is that Big Tech has innovated up to the realistic limits of what is currently possible, but is beholden to financial and market forces to keep inventing and innovating in large, impressive, sexy ways rather than improving and refining what already exists in smaller, less impressive, less sexy ways. So they're forced to pour the GDP of several nations into dead-end, go-nowhere, useless-but-impressive-sounding ideas like Google Glass, the Metaverse, cryptocurrency, NFTs, AI, etc.


JohnLaw1717

The metaverse and NFTs were made by outsiders. The metaverse was then co-opted by Facebook, and speculators rushed in and ruined NFTs, a kinda neat, fun little idea. Initially, they weren't proposed by corporations to appease shareholders; they were just experiments played with by enthusiasts. I don't think AI is comparable at all to those movements, except in that it was outside groups having the initial breakthroughs and it being co-opted by corporations.


AmethystStar9

Everything has to be invented by someone. Who invented what is irrelevant to the point, which is that the larger tech industry rallied around these things and pumped massive amounts of capital into them, only for them to turn out to be farts in the wind. Mirages.


backnarkle48

If the promise of AGI is the biggest fraud in history, then cryptocurrency is a close runner up.


MaleficentFig7578

It's not that GPT is good, it's that it doesn't need to be good because the jobs it's going to replace weren't actually doing anything useful. Nobody reads the TPS reports, so who cares if an AI wrote them?


wildemam

Well, you got it wrong. Documents that no one reads serve a vital purpose in defining liability and responsibility. AI cannot serve this purpose as it (till now) cannot pay for damages.


MaleficentFig7578

Risks that rarely materialize. As long as they don't materialize, it doesn't matter who wrote what. If they do materialize, your ass is on the line whether you used AI or not. May as well use AI.


wildemam

The idea is AI replacing you. AI has no ass to be on the line.


AsparagusDirect9

How is it not, though? How would we measure it?


Tactical_Laser_Bream

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


LoriLeadfoot

Adam Tooze posted in his Chartbook substack a graph showing that the market is basically already pricing in AI as an internet-tier productivity enhancement. ([Here is a link, though it might be paywalled like the post is](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9877339-b177-4ad6-a06d-49c685a07670_1538x1208.png)) So it has to be worth the hype or we’re headed for a dot com-style correction.


PostPostMinimalist

What? That graph shows basically no change in 15 years, then a very small spike at the end. In what way does being at the same value as in 2009 mean a correction is imminent?


AsparagusDirect9

Yeah, I also can’t wait for this interwebs fad to fizzle out.


nordic-nomad

It’ll still be around of course, but anyone paying attention realized the dream of the internet as we knew it died a while ago. In a lot of ways it’s more now like a series of intranets that sometimes talk to each other.


etzel1200

Is this all cope, or what? I don’t understand how the value isn’t inherently obvious.


Ongo_Gablogian___

If it was able to do the things they say it will be able to then the value is obvious. But it currently doesn't do anything we couldn't already do.


complicatedAloofness

But it can or almost can do it cheaper? Duh


JohnLaw1717

I'm a believer. But if AI is capable of what I believe it is, a stock market won't be necessary anymore. That's the real ironic aspect of all this.


PM_me_your_mcm

No, I don't think so? What would be cope here? I actually work in the field; I actively research and work on some of this technology. I'm not sure what better statement I could make about my feelings on its value: I see it and practice it actively.

What I do find somewhat annoying is the repetitive hype cycle. We seem to be unable to do steady investment and development. It's all boom or bust, and I've seen at least 3 cycles of boom now. It's bothersome to have a dip shit like Musk spouting drivel about how the Terminator is coming and being taken as an authority when he either A. doesn't know any better and should just shut the fuck up, or B. does know and hypes shit for his investment interests. I don't honestly know which it is, and I don't really give a shit about the people that show up late to the party looking to get rich only to get skinned alive, but I actually field requests on this shit, and the number of people that come to you with a pile of data assuming you can just perform some sort of black magic and make money appear becomes obnoxious.

There's a lot of value here though, and that's part of why I practice it. But we're also at some point in a hype cycle, and eventually someone actually has to pay the bills, someone asks where and exactly what the value is, and there's a reckoning between prices and output. Maybe we're at the peak, maybe we're just getting started. Maybe the Terminator does show up someday, but it's probably not going to be within the next 5 years.


CWang

**IN ARTHUR C. CLARKE’S** famous short story “The Nine Billion Names of God,” a sect of monks in Tibet believes humanity has a divinely inspired purpose: inscribing all the various names of God. Once the list was complete, they thought, He would bring the universe to an end. Having worked at it by hand for centuries, the monks decide to employ some modern technology. Two skeptical engineers arrive in the Himalayas, powerful computers in tow. Instead of 15,000 years to write out all the permutations of God’s name, the job gets done in three months. As the engineers ride ponies down the mountainside, Clarke’s tale ends with one of literature’s most economical final lines: “Overhead, without any fuss, the stars were going out.”

It is an image of the computer as a shortcut to objectivity or ultimate meaning—which also happens to be at least part of what now animates the fascination with artificial intelligence. Though the technologies that underpin [AI](https://thewalrus.ca/tag/artificial-intelligence/) have existed for some time, it’s only since late 2022, with the emergence of OpenAI’s ChatGPT, that something approaching intelligence has appeared much closer. In a [2023 report](https://blogs.microsoft.com/on-the-issues/2023/11/29/ai-canada-artificial-intelligence-and-data-act/) by Microsoft Canada, president Chris Barry proclaims that “the era of AI is here, ushering in a transformative wave with potential to touch every facet of our lives,” and that “it is not just a technological advancement; it is a societal shift that is propelling us into a future where innovation takes centre stage.” That is among the more level-headed reactions. Artists and writers are panicking that they will be [made obsolete](https://thewalrus.ca/ai-is-coming-for-voice-actors-artists-everywhere-should-take-note/), governments are scrambling to catch up and regulate, and academics are debating furiously.


insertnamehere65

That episode in Futurama makes so much more sense now


snek-jazz

We're going to find out what's really unique about us as humans: what, if anything, can't be done by a computer. Whether we really have something like a *soul* that affects what we build and create, or whether all of our outputs are just a function of our prior inputs. We might not like the answers we're going to find.


MaleficentFig7578

Our economy isn't based on meaning. It's based on bullshit. Bullshit jobs, bullshit memos. AI lets bullshit workers churn out more bullshit at an ever increasing pace. They will be rewarded with the obligation to increase their bullshitting pace even more. And we'll be surrounded by so much bullshit it's futile to dig through it all for the remaining gems. And then people will stop making the gems.


mangafan96

"The simulacrum is never that which hides the truth - it is truth that hides the fact that there is none. The simulacrum is true." -- Jean Baudrillard


Equivalent-Excuse-80

I day trade as a side gig, and our entire market system is based on the sizzle, not the steak.


Candid-Sky-3709

90% of everything is crap. Artificial Intellicrap just delivers that 90% garbage at ever-increasing speed. https://en.wikipedia.org/wiki/Sturgeon%27s_law


MaleficentFig7578

90% of everything will be AI crap. Of the remaining 10%, 90% will still be human-generated crap.


throwaway9gk0k4k569

bullshit bot accounts on reddit: https://old.reddit.com/user/CWang


VolkRiot

Friendly reminder that being a cynic, or up-voting a cynical take on the Internet doesn't make you smart.


Elegant_Studio4374

That’s why no one upvoted your cynical comment…


VolkRiot

You made this comment before people upvoted it didn't you 😂


VolkRiot

Oh, now I have made the morons mad. That was the point all along. Please add your downvote if you are a dumbass too. If you also think everything is bullshit then please hammer that downvote button. Prove who you are


moulinpoivre

Still not convinced that the current wave of AI isn’t all hype. Sure, it can do some amazing things at lightning speed, but it is not free: all of that AI requires a shit ton of data, and all those data centers burn through energy constantly. I read that in the US, data centers could soon use 10% of all electricity generated. All of that costs money. You can’t turn off the data centers and still run AI. AI is propped up by a mountain of speculative capital right now, and it is rapidly eating away at all of that capital. Just a giant pit that burns energy and creates hype.


KalimdorPower

Deep learning, yes, uses huge arrays of data. But there are also many other AI tools that can be used efficiently in terms of energy and data. However, these tools require much deeper knowledge of the area.


PM_ME_A_PM_PLEASE_PM

I like how the author goes from claiming AI's greatest threat is making infinite paperclips to conflating this genuine concern with their example of the World Wide Web as some promised utopia, which they had already contradicted as being the case with AI. They basically put their own foot in their own mouth on that one. I hate reading articles like this where the author is clearly out of their depth to make a truly helpful critique. They got close, but I barely found this article relevant to economics or helpful in broadening my understanding of the topic, and I am driven towards AI-related work myself.

I often see economists make similar mistakes where they overextend their understanding. It's true of everyone, even the tech bros this writer used as critique who believe AI shouldn't be regulated. Humanity has had an exponential economic experience since the industrial revolution thanks to those who contributed most to such ends. Economists, like everyone, often have only a fraction of the fundamental knowledge of why that's true. Because of this, I would often rather hear the consensus of engineers in a particular field regarding their takes on the future than an economist's; they often have a stronger relevant fundamental understanding. The people making the innovations in AI and its use in automation will tell you its limitations. The suggestion that it's an all-knowing God is just an active choice not to listen to them, as many did to climate scientists.

The author then focuses on philosophy regarding cognition, relating it to the latest LLMs and their lack of true understanding of, or meaning in, the words they use rather than brute-force pattern recognition. This is all to eventually suggest people are wrong to ask such models deep philosophical questions such as "what is the meaning of life?" But anyone with knowledge of ChatGPT or similar models would also tell you that's an awful question to ask, so the author is basically inventing their own nonsense to critique here. For shits and giggles, I asked ChatGPT what the meaning of life is, and this was its response:

>The meaning of life is one of those age-old questions that has intrigued humanity for centuries. Philosophers, theologians, scientists, and thinkers from various disciplines have pondered this question, and there's no single, universally agreed-upon answer.

>For some, the meaning of life is tied to religious or spiritual beliefs, where it may involve fulfilling a divine purpose, achieving enlightenment, or experiencing a connection with a higher power. Others view the meaning of life through a more existential lens, suggesting that it's up to each individual to create their own meaning through personal relationships, experiences, and contributions to society.

>Existentialist philosophers like Jean-Paul Sartre and Albert Camus argued that life is inherently meaningless, but it's up to each individual to find or create meaning through their choices and actions.

>From a biological perspective, the meaning of life could be seen as simply propagating one's genes and ensuring the survival of the species.

>Ultimately, the meaning of life is subjective and can vary greatly from person to person. It's a question that invites introspection and contemplation, and the answer may evolve over time as individuals grow and change.

To me, that reads as a responsible response that doesn't need fearmongering, and this from the mere pattern-recognition bot, to give it even more credit.

What's funny to me is that fearmongering absolutely is worthwhile in some aspects of this, as the author pointed out with the infinite paperclips example, or the lack of human ethical understanding while a superintelligence pursues a non-terminating trivial request. And there are aspects of our current protocols for AI development that bring that concern to life, as we live in a highly individualistic world, driven by short-term profit and other selfish desires, that is not mature enough to approach such powerful tools responsibly. Microsoft, Meta, and Alphabet are certainly more beholden to their shareholders' interests than to the economic or ethical consequences for the world. The world has already largely failed at a simpler version of this problem: we had a similar market failure with climate change.

This is where the discussion needs to go if we are serious about ethical critique rather than mere fearmongering. To their credit, the author does suggest this through the critique of Marc Andreessen, and even more fundamentally through questioning later on that makes clear how vital quality democratic regulation is for this problem. I just think the critique was poorly grounded in our means of innovation and political struggle, which run through economic conflicts of interest in an unequal world of power, over complex problems like this and climate change. They should have used climate change as the simpler-to-grasp example of a similar bias-driven market failure, since it shows how and why that happens.


w8str3l

I asked an AI what the meaning of life was and it talked about “finding a purpose” and “challenging yourself” and “deriving satisfaction from your own efforts”, and then I asked it what was the first thing it did in the morning, and I must admit that I was shocked, shocked! at how you choose to spend your precious time on this earth, however “purposeful”, “challenging”, and “satisfactory” your “efforts” might appear to you. What do you have to say for yourself, young man?


jeremiah256

If you’re familiar with futurists like Vernor Vinge and Ray Kurzweil, then you know their writings on the singularity. They argue that we are approaching, or perhaps already at, a point where the pace of technological change will be too rapid for human minds, cultures, and societies to keep up with. The internet, social media, and large language models (LLMs) are all phenomena where the best we may be able to do is ride the waves and hope not to crash. To dismiss all of this as mere ‘hype’ is to ignore the massive technological changes that have already transformed our world in less than a single generation.


petesapai

The media loves listening to NVIDIA's insufferable CEO, forgetting to mention that his raison d'être is to pump up his stock. Spoiler: [it's working](https://cdn.statcdn.com/Infographic/images/normal/32358.jpeg)


randomlydancing

I'm weirded out by the "AI taking all the jobs" talk, as it makes me wonder if these people have actually worked at a large company before. Most large companies still run on awful spreadsheets and unproductive people, and somehow they still hold dominant market share


Famous_Owl_840

This is why AI will be regulated in odd ways. A significant portion of our workforce is employed in ‘make work’ jobs. If those jobs disappear, there will be a significant number of people unemployed. Think about all those ‘girl boss’ jobs of unskilled graduates working remotely doing project management. Those jobs are 100% useless and are only around as a CYA for companies to have a balanced workforce. If AI started eliminating even the pretense of those jobs, regulations would come down quicker than Congress votes itself raises.


PostPostMinimalist

That’s sort of exactly the point. If someone’s job is to basically summarize meetings and emails into spreadsheets…


randomlydancing

My point is that those jobs could have been automated even before this, with databases and better processes, but they weren't - and it's not like those businesses were outcompeted


Vegan_Honk

It's a variant of the George Carlin joke, only instead of just taking off the labels, the AI will tell you to add it to your fruit punch.


National-Restaurant1

Train these things to work in silos on narrow, humanity-enhancing objectives and we could be on the cusp of achieving the once unimaginable. A lot of new companies and jobs can be created this way. Train these things to work generally so that they just become better than humans at everything, quickly, and we’re fucked. Fucked because we won’t be able to adapt to all of that soon enough. A few super powerful networks can be created this way and humans will just be like ants.


[deleted]

The short-term (10-20 year) impact of AI on labor isn't replacing the full function of any profession. It's replacing a significant portion of every profession. How often do you read a post that boils down to "my job can't be replaced by AI because there are some things in my profession that AI can't do"? We are all familiar with the 80-20 rule. It's not exactly true that 80% of the work is done by 20% of workers, but it's probably not far off. So suppose AI can perform 20% of the work done in your profession - the 20% currently produced by the bottom 80% of workers. That means AI could replace 80% of the workers in that profession. The questions workers should be asking are whether AI will be able to perform 20% of their work, and whether they are among the top 20% in their profession.
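That back-of-the-envelope logic can be sketched in a few lines of Python. To be clear, this takes the 80-20 split literally as an assumption, not as data, and the function name and parameters are just made up for illustration:

```python
# Toy Pareto-style model: assume the top `top_share` of workers produce
# `top_output_share` of the output, so everyone else produces the rest.
# If AI absorbs `automatable_output_share` of total output, how many
# workers (starting from the bottom) does that notionally displace?

def workers_displaced(total_workers, top_share=0.2, top_output_share=0.8,
                      automatable_output_share=0.2):
    bottom_output_share = 1.0 - top_output_share  # e.g. 20% of output
    bottom_workers = total_workers * (1.0 - top_share)  # e.g. 80% of workers
    if automatable_output_share >= bottom_output_share:
        # AI covers everything the bottom group produces
        return int(bottom_workers)
    # Otherwise displace a proportional slice of the bottom group
    frac = automatable_output_share / bottom_output_share
    return int(bottom_workers * frac)

print(workers_displaced(100))   # AI doing 20% of the work -> 80 workers
print(workers_displaced(100, automatable_output_share=0.1))  # half that slice
```

Obviously real labor markets don't work this cleanly (output isn't neatly separable by worker, and displaced workers shift tasks), but it shows why "AI can only do 20% of my job" is not the reassurance it sounds like.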


kantmeout

The hype around AI is coming from three separate sources. One is the developers, who have a financial stake in hyping their expensive products. The second is books, movies, and games that feature all-powerful AI. They often use AI as villains, but sometimes as saviors. The third source, though, is philosophers: people who've looked at the abstract notion of a machine that is smarter than humans and considered the theoretical implications. Humanity's status as apex predators is tied to our status as apex thinkers. This could be threatened if we produce something smarter. If it's able to improve itself, then you have a singularity where its intelligence increases exponentially. The implications of that are where the AI-as-god concept emerges. For this last group, however, the hype is in anticipation of future developments.


JaydedXoX

AI won’t go away. This isn’t crypto or some flash-in-the-pan tech; this is the second invention of the WWW. Lots of people scoffed when the dot-com bubble burst, but the internet kept coming. AI will get better, like it or not, and the analytical/automation possibilities are endless. The ability to gather and crunch data, analyze it with large language models, recommend a solution, and then execute through robotic automation has untold possibilities. At the bare minimum, asking AI for things you used to Google gives you the ability to bypass paid content short term and get a real answer. Will it be now or 5 years from now? Who knows. But a tool that gets smarter and trains itself should not be underestimated as to its potential. In a few years, AI won’t take your job, but someone who uses AI to be more productive than you might.


alpacante

The article doesn't say AI is going away. What are you talking about?


JaydedXoX

Not the article but the top commenters are acting like AI is a fad and a “hype”.


teflong

I'm sure they said the same thing about cars, airplanes, telephones, and the internet.  All this is, is an article that will age incredibly poorly at some point between now and 20 years from now. 


SadCauliflower1307

And the same people breathlessly talking up the earth-shattering nature of AI also said the same thing about NFTs, the blockchain, and the metaverse.


mikejacobs14

Man, this subreddit really is extremely biased. Conflating NFTs, blockchain, and the metaverse with AI is ridiculous. AI already has tons of use cases: it's being used in translation (last year I used it to translate 5000 chapters of Chinese novels, and the quality was good), in code generation (I'm a software developer, and it has been really useful for generating templates for me), and the list goes on. We have only just started, and the progress has been amazing.


SadCauliflower1307

2 years ago you would have had “smart” tech people breathing down your neck for doubting the near-limitless potential of NFTs and the Metaverse, now they’re a punchline. What AI has over NFTs, Blockchain and the Metaverse is that it’s not *completely* useless at face value, but given how easily the exact same figures in the tech world bought into facially worthless technologies as being revolutionary keys to the future I’ll believe the hype about AI when I see it. Generally when someone tells you “this is only the beginning” of a technology they’re invested in, it’s smart to be skeptical.


alpacante

Did you even read the article? What point do you disagree with?