What Will Remain for People to Do? The future of labor in a world with increasingly productive AI.
-
What paid work might remain for human beings to do if we approach a world where AI is able to perform all economically useful tasks more productively than human beings? In this paper, I argue that the answer is not ‘none at all.’ In fact, there are good reasons to believe that tasks will still remain for people to do, due to three limits: ‘general equilibrium limits,’ involving tasks in which labor has the comparative advantage over machines (even if it does not have the absolute advantage); ‘preference limits,’ involving tasks where human beings might have a taste or preference for an un-automated process; and ‘moral limits,’ involving tasks with a normative character, where human beings believe they require a ‘human in the loop’ to exercise their moral judgment. In closing, I consider the limits to these limits as AI gradually, but relentlessly, becomes ever-more capable.
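The abstract's comparative-advantage point can be made concrete with a toy calculation (all productivity numbers below are hypothetical): even if AI holds the absolute advantage in every task, opportunity costs are relative, so the human side still holds the comparative advantage in whichever task it is relatively least bad at.

```python
# Toy illustration of the 'general equilibrium limits' argument.
# The AI is more productive at BOTH tasks (absolute advantage), yet
# total output is maximized when each producer takes the task where
# its opportunity cost is lowest. Numbers are hypothetical.

productivity = {
    "ai":    {"coding": 10.0, "care_work": 4.0},  # output per hour
    "human": {"coding": 1.0,  "care_work": 2.0},
}

def opportunity_cost(producer, task, other_task):
    """Units of other_task forgone to produce one unit of task."""
    p = productivity[producer]
    return p[other_task] / p[task]

for task, other in [("coding", "care_work"), ("care_work", "coding")]:
    best = min(("ai", "human"),
               key=lambda who: opportunity_cost(who, task, other))
    print(f"comparative advantage in {task}: {best}")
# prints:
# comparative advantage in coding: ai
# comparative advantage in care_work: human
```

Here the AI gives up 2.5 units of coding per unit of care work while the human gives up only 0.5, so the human retains the comparative advantage in care work despite being slower at everything.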
-
-
What will be left for us to do is be homeless.
-
Not going to read most of this paper, because it reads like a freshman thesis, and it fundamentally oversells or misunderstands the existing limits on AI.
In closing, I consider the limits to these limits as AI gradually, but relentlessly, becomes ever-more capable.
The AI technofascists building these systems have explicitly said they've hit a wall. They're having to invest in their own power plants just to run these models. They have scores of racks of GPUs, so they're dependent upon the silicon market. AI isn't becoming "ever more capable," it's merely pushing the limits of what they have left.
And all the while, these projects are still propped up almost entirely by venture capital. They're an answer to a problem nobody is having.
Put another way, if the leaders of the AI companies are right in their predictions, and we do build AGI in the short- to medium-term, will these limits be able to withstand such remarkable progress?
Again, the leaders are doing their damnedest to convince investors that this stuff will pay off one day. The reality is that they have yet to do anything close to that, and investors are going to get tired of pumping money into something that doesn't return on that investment.
AI is not some panacea that will magically make ultracapitalists more wealthy, and the sooner they realize that, the sooner we can all move on—like we did with the Metaverse and blockchain.
-
The AI technofascists building these systems have explicitly said they've hit a wall. They're having to invest in their own power plants just to run these models. They have scores of racks of GPUs, so they're dependent upon the silicon market. AI isn't becoming "ever more capable," it's merely pushing the limits of what they have left.
While I agree that this paper sounds like a freshman thesis, I think you're betraying your own lack of knowledge here.
Because no, they haven't said they've hit a wall. There are reasons to be skeptical of the brute-force scaling approach a lot of companies are taking, but those companies are doing it because they have massive amounts of capital, and scaling is an easy way to spend that capital to improve your model's results while your researchers figure out how to build better models, leaving you in a better market position when the next breakthrough or advancement happens.
The reasoning models of today, like o1 and Claude 3.7, are substantially more capable than the faster models that predate them, and while you can argue that the resource/speed trade-off isn't worth it, they're also the very first generation of models trying to integrate LLMs into a more logical reasoning framework.
This is on top of the broader usage of AI, which is rapidly becoming more capable. The fuzzy pattern-matching techniques that LLMs use have already revolutionized fields like protein structural analysis, all the result of a single targeted DeepMind project.
The techniques behind AI let computers solve whole new classes of problems that weren't possible before; dismissing that is just putting your head in the sand.
-
This current iteration of "AI" is just autocorrect on steroids, so... no, no AGI yet.
There'll be a lot of work fixing the effects of vibe-coding and similar practices, for sure.
-
Not yet, but it's an interesting thought experiment if nothing else. Someday, thanks to advances in robotics and computers, human labor will become largely obsolete. So the question is how do we structure our society when that happens?
The real question isn't how we structure our society if some extremely far-fetched scenario happens. The real question is how we restructure the society we have right now, which is already failing most of the people in it.
Labor is not a necessity for people to survive; in fact, given how little enjoyment most people get out of the actual labor, most would consider a world where their job wasn't required a utopia. The real question is about wealth distribution, not labor.
-
-
But labor is a necessity to survive, and always has been. We need the production of goods and services. Of course the distribution of wealth and goods is also an issue, but somebody (or something) has to produce the things we use.
Labor is a human putting in work. Fully automated production is already a thing for some goods and services today, and many others have a much, much larger automation component than they had historically.
Don't confuse the wealth distribution mechanism (getting paid for labor) with the actual work itself.
-
Not yet, but it's an interesting thought experiment if nothing else. Someday, thanks to advances in robotics and computers, human labor will become largely obsolete. So the question is how do we structure our society when that happens?
I'm sorry, but that's wishful thinking (IMO).
Don't get me wrong, humanity may still be around when we reach the point where that's technically possible, but it'll be more of the cyberpunk-dystopia kind.
-
-
If human labor becomes obsolete, our current ruling class might attempt to just kill off all of us "undesirables".
-