I unfortunately work with AI and actually understand how it works. It’s going to replace workers the same way that cocaine replaces workers.
It’ll make some knowledge workers moderately more productive but that excess will be absorbed like with any other tool and we’ll just do more shit as a society at the expense of continuing to destroy the environment.
Once the bubble bursts and things calm down there will probably be some job growth as the economy figures out how to better utilize these new tools. It’s like if you invented a machine that could frame 60% of a house and brilliantly declared you’d fire all the framers but then realized you’re now building a lot of houses and need more framers than before to finish the remaining 40%.
That seems to result in a higher burnout rate. The worker has to do more soul-crushing check-and-verify work instead of actual knowledge work.

Can confirm. It’s not AI, but probably 80% of my job is just emailing other people to do shit, emailing other people status updates about their work, and verifying their completed work, which is frequently wrong. It sucks.
IMO, the only use of text generators that should be taken seriously is natural language processing:
take this fat block of text and give me a bullet point list.
what are synonyms for X?
copy-paste a big TOS and tell me the key takeaways that are anti-customer.
take these documents and make one coherent document about one page long.
etc.
The problem is that even with tasks like these, it frequently fails: it hyperfixates on some details while completely glossing over others, and whether it does that or does a good job is completely random. That uncertainty basically forces you to check everything it outputs, which negates much of the productivity you gain.
I once used it for a Python script, and out of three generations I only kept one part: a single regex function ended up in my real script. To be fair, it gave me the idea to use regex in the first place, and its output actually worked.

Yeah, it accidentally gets stuff right on occasion.
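For context, the surviving piece was on the order of a small helper like this (a hypothetical stand-in; the thread doesn’t say what the actual regex did):

```python
import re

def extract_dollar_amounts(text):
    """Pull dollar amounts like '$15' or '$7.75' out of free text.

    Hypothetical example of the single regex function that survived
    out of three AI generations; the original script isn't shown.
    """
    return [float(m) for m in re.findall(r"\$(\d+(?:\.\d{1,2})?)", text)]

print(extract_dollar_amounts("Most cities are approaching $15, not $7.75."))
# → [15.0, 7.75]
```

Which is roughly the point: a ten-line function like this is easy to eyeball and test, so the "check everything it outputs" tax is small enough to be worth paying.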
You are thinking of office work, but there are a LOT of jobs that will be permanently replaced by AI-driven robotics, like fast food workers, retail shelf stockers, drivers, warehouse work, etc. Those are workers that can’t be easily trained UP, and many will likely become permanently unemployed.
I don’t buy that. There’s little reason to automate those jobs because the labor is so cheap. And as someone who has worked most of those jobs in the past, most of those workers could be easily trained for different jobs; most are actively taking it upon themselves to train to get out of them.
Labor is cheap? Most cities are approaching $15 an hour, and even in those immoral states that keep it at the federal minimum of $7.25, a robot is still going to be cheaper in the long run. Then there are benefits, payroll taxes, personal issues, schedules, etc. People are a pain in the ass, and expensive in a lot more ways than money.
Besides, it almost certainly won’t be up to the franchisee. When corporate decides that they can be more efficient and more PROFITABLE with automation, the stores will go along with it, whether they like it or not.
It’s not an if, it’s a when. It’s definitely going to happen.
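Whether "cheaper in the long run" holds turns entirely on the assumed numbers. A quick back-of-envelope (every figure below is a made-up assumption for illustration, not data from this thread) shows the comparison can go either way:

```python
# Back-of-envelope: fully loaded human cost vs. amortized robot cost.
# All numbers are assumptions for illustration only.
wage = 15.00                 # $/hour
overhead = 1.3               # benefits + payroll taxes multiplier (assumed)
hours_per_year = 2080        # full-time
human_cost = wage * overhead * hours_per_year

robot_price = 250_000        # purchase price (assumed)
robot_life_years = 7         # depreciation period (assumed)
robot_maintenance = 20_000   # $/year upkeep (assumed)
robot_cost = robot_price / robot_life_years + robot_maintenance

print(f"human: ${human_cost:,.0f}/yr")   # human: $40,560/yr
print(f"robot: ${robot_cost:,.0f}/yr")   # robot: $55,714/yr
```

With these particular assumptions the robot loses; halve the purchase price or the maintenance bill and it wins. That sensitivity is exactly what the rest of the thread argues about.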
I think you might be underestimating the costs of upkeep and repair of those robots. The McCorps will have to figure that piece out before they can go balls-deep on automation.

Automation is going to be much more efficient, and therefore much more profitable, than human employees. Repair and maintenance will be negligible.
Respectfully, you don’t know what you’re talking about.
I am an accountant for a company that has both types of manufacturing. The robot-heavy factories are so much more of a pain in the ass than the people-heavy ones.
Per plant, on average I spend more fixing and maintaining robots than I do on labor at the other plants.
People are stupid but easy to train.
Computers are smart but hard to train.
IDK why y’all think the computers will be easier to tame but hey, feel free to compete with everyone else with one arm tied behind your back.
I’m trying to figure out why everyone is so mad about AI.

I’m still in the “wow” phase, marveled by the reasoning and information it can give me, and I’ve just started testing some programming assistance, which seems fine with a few simple examples (using free models for testing). So I still can’t figure out why there’s so much pushback. Is everyone using it extensively and reaching a dead end in what it can do? Give me some red pills!
I’m working with people who seem to try to offload a lot of their work to AI, and it’s shit, and it’s making the project take longer and come out shittier. Then they do things like write documents with AI and expect people to read that nonsense, and even use AI to send long, useless Slack messages. In short, it’s been detrimental to the project.
There are many reasons. My biggest problem with it is that it enables the production of an incredible deluge of cheap, shitty content (aka slop), sufficient to drown out a lot of more interesting, decent work.
This is compounded by big tech having decided that slop is preferable to real content. This leads to the general feeling that I’m drowning in an ocean of shit, and thus I dislike AI.
It doesn’t reason, and it doesn’t actually know any information.
What it excels at is giving plausible-sounding averages of texts, and if you think about how little the average person knows, you should be appalled.
Also, where people typically can reason enough to make the answer internally consistent or even relevant within a domain, LLMs offer a polished version of the disjointed amalgamation of all the platitudes or otherwise commonly repeated phrases in the training data.
Basically, you can’t trust the information to be right, insightful, or even unpoisoned, while it sabotages your strategies and systems for sifting information from noise.
ETA: All for the low, low cost of personal computing, power scarcity, and drought.