The strongest critique of “AI will fix it” is not anti-technology

Artificial intelligence is increasingly marketed as a remedy for problems in education, agriculture, employment, and public service delivery. That framing is seductive because it compresses messy social failures into tractable engineering challenges. If institutions are slow, underfunded, or fragmented, the promise of a responsive AI model seems almost irresistible.

But an essay published by Rest of World argues that this framing misses the central reality of social systems: technical capability alone is not enough. Even sophisticated AI tools need human support, institutional capacity, and local accountability if they are to do more than generate impressive demos.

The article, written by Cornell researchers Deepak Varuvel Dennison and Aditya Vashistha, does not deny AI’s genuine potential. It explicitly acknowledges growing evidence of productivity gains and the appeal of AI in both private and public sectors. Its argument is narrower and more important: deploying AI in underserved communities is not the same thing as solving their problems.

The contradiction at the center of AI-for-good

The essay highlights a structural tension. AI is often presented as a tool for addressing inequality, exclusion, and service gaps. Yet the systems themselves are shaped by extractive supply chains, concentrated power, and existing inequities. Drawing on themes associated with books such as AI Snake Oil and Atlas of AI, the authors position AI not as a neutral software layer, but as a socio-technical system built on natural resources, human labor, and entrenched institutions.

That matters because the communities most often targeted by “AI for social good” projects are also the communities most likely to bear the costs of poorly designed interventions. A model that appears efficient from a distance may still fail locally if it ignores language, trust, access, governance, or the human intermediaries required to act on its outputs.

The core question, then, is not whether AI can help. It is what conditions must exist for it to help in a durable and accountable way.