AI’s Likely Future (at this point)
Many futurist visions focus on the augmentation of individual human intelligence, and that is certainly the positive outcome we all hoped for from the information revolution in general and AI in particular. But we have to confront the fact that the current abilities of AI largely depend upon having absorbed the output of hundreds of thousands, if not millions, of individuals, none of whom received anything in return, except for vague promises of an “empowered” future.
Over the next ten years, AI will largely serve large organizations, which will mean only more trouble for working Americans and will ultimately lead to a drop in creativity and innovation. As more organizations feel obligated to incorporate AI into their business, they will do so at the expense of hiring new employees, whose work they imagine can be readily replicated by AI. Moreover, many of these organizations are of such a scale and nature that their use of AI, which will be under-informed (because over-hyped), will often produce negative results for end users.
The adverse effects of applying algorithmic solutions to human-complex problems have already been established across a number of systems. Sentencing software that was supposed to make the process more objective and fair turned out to be racist because it was trained on prior cases. The same has been revealed in the insurance industry’s use of algorithms, whether to deny claims for health care or to reject policies based on something the algorithm did not understand. Attempts to appeal errors of fact were refused because the software said so.
Large language models, the current instantiation of AI in the public imagination, are perhaps more robust and subtle than their more obviously statistical-learning cousins, because of the sheer size of the data on which they have been built, but they are still statistical machines. Nothing more. Yet the results they produce seem so human: creating a chat interface for GPT may have been the most innovative marketing move of the early twenty-first century. Too many consider them as good as humans, despite the fact that humans develop the same competency with far less data and computational power.
Humans are also more context-aware and responsive to nuances of interaction. While the adaptability of AI to a wide variety of situations has been impressive, we have already seen repeated instances of the cracks that begin to show at boundary cases — we must never forget that the average is a statistical fiction and does not represent the fullness of lived reality.
That is, too many do not understand what AI is and in the process grant it human kinds of understanding it does not possess. Much of the responsibility for this lies with the large corporations that own the technology and are eager to capitalize, quite literally, on their investment, and with other institutions seemingly frozen by the rapidity of AI’s deployment.
Combine this with organizations keen to rid themselves of the workers who do the kinds of repetitive tasks that automation largely does well, and you have a perfect storm of sellers and buyers. But how much of our lives do we want to be determined, in effect, by automation? So long as a person remains in the loop, there is a glimmer of hope that empathy may come into play. There is no such hope with AI.
We may very well see the end of so-called bullshit jobs, jobs that seem on their surface to be meaningless because of their repetitive or pass-through nature. But bullshit creates two things that are important to innovation: boredom and friction. Without a chance to be paid to be frustrated or paid to daydream, there will be less opportunity for creative individuals to find entirely new categories of products and services. We know how creative economies work, and they don’t work if the source of creativity slowly dries up so that only the very top economic tier has the time, and the permission, to think.
And while some might argue that it won’t be long before AI achieves divergent thinking, they miss that one important dimension of creativity is acceptance. If fewer people are working, the market for products will be smaller, decreasing the overall creativity of the organizations and of the society they serve.
Limiting our scope to the next ten years, from 2025 to 2035, and to the American scene, it seems clear that AI’s “augmentation” of the human ability to process information and make decisions will be largely institutional in nature and that the impact will not be one we desire. We can only hope that enough independent thinkers and practitioners continue to lurk in universities and small businesses that real innovation will continue to percolate and make possible the kind of AI revolution so many dream of.
My fear is that, because the resources required to “optimize” AI largely lie with larger institutions, and because the American policy environment, at least currently, allows those institutions to be privately held and thus optimized for profit rather than for the public good, individuals will more often than not be the object of AI rather than its subject. Observing a similar emptying of a workscape in the face of large-scale automation thirty years ago, Wendell Berry wondered whether we had a good answer to the question: what are people for?
The real AI misinformation crisis isn’t about deepfakes — it’s about who controls the wealth and power to shape what everyone believes. (https://bsky.app/profile/mychal3ts.bsky.social/post/3lfi74rjmfc2x)