Shopify CEO Tobi Lütke tells employees to prove AI can’t do the job before asking for resources.
-
Let's all just make new companies that are unionized co-ops and bring all our coworkers.
In this example, the CEO isn't needed.
-
I develop AI agents right now and have yet to see one that can perform a real task unsupervised. That's not what agents are made for at all: they're only capable of acting as an assistant, or annotating and summarizing data, etc.
So his take on AI agents doing work is pretty dumb for the time being.
That said, an AI tool-use proficiency test is very much unavoidable. I don't see any software company not using AI assistants, so anyone who doesn't will simply not get hired. It's like coding in Notepad: yeah, you can do it, but it's not a signal you want to send to your team, because you'd look stupid.
-
Which humans can be far better than AI at, in terms of just directly following the assigned task, but that doesn't factor in how people can adapt and problem-solve.
How's that annoying meme go? Tell me that you've never been a middle manager without telling me that you've never been a middle manager?
You can keep pulling numbers out of your bum to argue that AI is worse. That just creates a simple bar to follow because... most workers REALLY are incompetent (now, how much of that has to do with being overworked and underpaid during late stage capitalism is a related discussion...). So all "AI Companies" have to do is beat ridiculously low metrics.
Or we can acknowledge the real problem. "AI" is already a "better worker" than the vast majority of entry level positions (and that includes title inflation). We can either choose not to use it (fat chance) or we can acknowledge that we are looking at a fundamental shift in what employment is. And we can also realize that not hiring and training those entry level goobers is how you never have anyone who can actually "manage" the AI workers.
how you never have anyone who can actually “manage” the AI workers.
You just use other AIs to manage those worker AIs. Experiments do show that different instances of an AI/LLM, each with an assigned role like manager, designer, coder, or quality checker, perform pretty well working together. But that was with small stuff; I haven't seen anyone willing to test it on complex products.
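The role-based setup described above can be sketched as a simple pipeline. This is a toy illustration, not a real framework: `fake_llm` is a hypothetical stand-in for an actual LLM call, and in practice each role would be a separate model instance with its own system prompt.

```python
def fake_llm(role: str, task: str) -> str:
    """Hypothetical LLM stub: each role annotates and hands off the task."""
    return f"[{role}] {task}"

# Assumed org chart: manager delegates, designer specs, coder builds, QA checks.
ROLES = ["manager", "designer", "coder", "qa"]

def run_pipeline(task: str) -> str:
    """Pass the task through each role in sequence, like a tiny org chart."""
    result = task
    for role in ROLES:
        result = fake_llm(role, result)
    return result

print(run_pipeline("build login form"))
# → [qa] [coder] [designer] [manager] build login form
```

A real version would add feedback loops (QA sending work back to the coder), which is where the small-scale experiments tend to break down on complex products.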
-
"What laptop?" is what I said
Still have mine gathering dust. An American startup laid me off one day before they would have been legally required to grant me my equity shares, and they had the audacity to ask me to arrange the return lmao
-
Ah yes, more paperwork is certainly going to make your employees more productive. Why don't you also require them to prototype whether kicking a rock against the wall 10 times does the job, instead of actually letting them do the job?
-
I develop AI agents right now and have yet to see one that can perform a real task unsupervised. That's not what agents are made for at all: they're only capable of acting as an assistant, or annotating and summarizing data, etc.
So his take on AI agents doing work is pretty dumb for the time being.
That said, an AI tool-use proficiency test is very much unavoidable. I don't see any software company not using AI assistants, so anyone who doesn't will simply not get hired. It's like coding in Notepad: yeah, you can do it, but it's not a signal you want to send to your team, because you'd look stupid.
Honestly, AI coding assistants (as in the ones working like auto-complete in the code editor) are very close to useless unless maybe you work in one of those languages like Java that are extremely verbose and lack expressiveness. I tried using a few of them for a while but it got to the point where I forgot to turn them on a few times (they do take up too much VRAM to keep running when not in use) and I didn't even notice any productivity problems from not having them available.
That said, conversational AI can sometimes be quite useful to figure out which library to look at for a given task or how to approach a problem.
-
AI is pretty good at spouting bullshit but it doesn't have the same giant ego that human CEOs have so resources previously spent on coddling the CEO can be spent on something more productive. Not to mention it is a lot less effort to ignore everything an AI CEO says.
-
Dear CEOs: I will never accept 0.5% hallucinations as “A.I.”, and if you don’t even know that, I want an A.I. machine cooking all your meals. If you aren’t OK with 1 in 200 of your meals containing poison, you’re expendable.
Humans, or even regular-ass algorithms, are fine. A.I. can predict protein folding. It shouldn’t do much else unless there’s a generational leap from “making shitty images” to “as close to perfect as it gets.”
Cooking meals seems like a good first step towards teaching AI programming. After all the recipe analogy is ubiquitous in programming intro courses. /s
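To put a number on the 1-in-200 (0.5%) rate in the comment above: even a small per-task error rate compounds quickly over many tasks. A quick back-of-the-envelope calculation, assuming errors are independent across tasks:

```python
def p_at_least_one_error(per_task_rate: float, n_tasks: int) -> float:
    """Probability of at least one error across n independent tasks."""
    return 1 - (1 - per_task_rate) ** n_tasks

# At 0.5% per meal, over 200 meals the odds of at least one poisoned
# meal are not 0.5% but roughly 63%.
print(round(p_at_least_one_error(0.005, 200), 3))  # → 0.633
```

The independence assumption is generous to the AI; correlated failure modes (the same hallucination repeated across tasks) can make things worse in practice.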
-
Employees should start setting up an AI to prove it can do Tobi Lutke's extremely difficult job of making a small number of important decisions every once in a while.
Can you prove that he makes any important decisions?
-
...
What error rate do you think humans have? Because it sure as hell ain't as low as 1%.
But yeah, it is like the other person said: This gets rid of most employees but still leaves managers. And a manager dealing with an idiot who went off script versus an AI who hallucinated something is the same problem. If it is small? Just leave it. If it is big? Cancel the order.
The error rate for human employees for the kind of errors AI makes is much, much lower. Humans make mistakes that are close to the intended task and have very little chance of being completely different. AI does the latter all the time.
-