부키
January 4, 2026
AI: views on the relationship between AGI and human jobs are splitting into two camps
[Attached media]
Valuable post about artificial general intelligence (AGI), sometimes called human-level AI
Quoted tweet: There are broadly two ways people think about AGI and labour:
Position A holds that humans get fully substituted; it is usually advanced by parts of the AI commentariat.
The argument is that if AGI is a scalable input that can do what workers do at lower cost, then the market value of human work falls. Even if humans remain physically capable, and even if adding AI raises human "physical productivity" in some narrow sense, the prices of what humans can sell can fall faster because AI floods supply. In competitive equilibrium, firms buy the cheapest effective input. Unless there is a large and persistent demand for "specifically human" labour (therapy, arts etc), wages are pushed toward the minimum people will accept; if the market-clearing wage is below social/legal/psychological floors, this shows up as unemployment rather than just low wages. All of this is in principle possible and a coherent argument, and I've written about it before.
Position B is the economics reply, which doesn't depend on 'line goes up' alone.
"AGI implies humans won't work" requires a corner solution: AI and labour must be perfect substitutes across most tasks, and compute must become cheap enough to saturate the economy. (Note that "perfect substitutes" doesn't mean "AI can do anything humans can", but that the two are interchangeable with no synergies from combination.) Standard production theory suggests a different dynamic: when two inputs are imperfect substitutes, adding more of one tends to raise the marginal product of the other: more AGI makes the remaining human contributions more valuable, not worthless.
Many substitution arguments also assume away the real constraints on scaling compute (capital, energy, materials, bottlenecks), effectively smuggling "infinitely abundant AI" into the premises. So full displacement is in principle possible, but inevitability is an overclaim. Unless AGI can do literally everything and becomes abundant enough to meet all demand, it behaves broadly like powerful automation has before: replacing humans in some uses while expanding the production frontier in ways that sustain demand for labour elsewhere.
Economists have a specific way of thinking about this which might turn out to be wrong for subtle reasons (e.g. if we truly hit the scenario where humans offer zero comparative advantage, like horses). However, the current discourse in the AI world is dominated by voices who haven't even seriously considered or engaged with the mechanisms economists bring up.
Position A sometimes reasons from the limit case without defending the assumptions needed to reach it (deployment speed, cost curves, complementarity, preferences for human services, institutional response, automation of all physical processes etc). There's more friction and agency here than deterministic worst-case modelling assumes. Note also that in discussing this, I'm not even taking into account the massive welfare benefits of decreased prices, longevity improvements, and high economic growth.
So amidst all this uncertainty, I find it irresponsible when commentators popularize memes about "total disempowerment" as foregone conclusions, as these also make implicit claims about political and institutional dynamics. The problem isn't just pessimism, it's that the vast majority of critics from the CS and futurist side don't even take the economic modeling seriously. Though equally, many economists tend to refuse to ever think outside the box they've spent their careers in. I've been to some great workshops recently that bring these worldviews together under the same roof, and I hope there will be a lot more of this in 2026.