The Ghost in the Machine: AI’s subtle annihilation of self - Md Din Islam
“In the Western world, rising ‘deaths of despair’ and declining birth rates result from complex factors. One possible cause is what sociologist Richard Sennett calls the ‘spectre of uselessness.’
This feeling arises not only from work-related redundancy but also from a deeper existential sense of being crowded out in an already occupied world. Ordinarily, human society provides a welcoming context. You’re born into a culture that shapes your language and shares stories of your origins. Generations of love and care create this meaningful world. However, when life seems dictated by impersonal forces, that sense of love and continuity is disrupted.”
I attended a small dinner in Grand Rapids, Michigan, where I sat next to a man whose daughter had recently married. As the wedding day approached, he wanted to give a heartfelt speech at the reception. Finding the right words for such an occasion can be challenging, so he turned to ChatGPT for help. The AI provided a decent wedding toast, perhaps even better than what he could have written himself. However, in the end, he chose to compose his own speech.
This decision struck me as significant. Had he used the machine-generated text, he would have distanced himself from this important moment in his daughter’s life; it would have been like not showing up for her wedding. It reminded me of Tocqueville’s observation about America’s trajectory towards an “immense tutelary power” that seeks to make life easier for us.
Language is essential to our humanity. Unlike large language models (LLMs) or parrots, we care about finding the right words. We want to express significance accurately. Crafting a wedding toast, for instance, involves capturing the essence of your relationship with your daughter—truthful, pleasing, and perhaps sprinkled with humour. By offering warmth without tipping into sentimentality, you aim to create an intimate and memorable moment for everyone present.
When a father sits down with pen and paper, he strives to capture the essence of his daughter—the truth seen through the unique lens of a parent. This task is especially significant during pivotal moments in her life. As Charles Taylor suggests, finding the “right word” allows us to bring a phenomenon into clear view, revealing its depth and significance.
We also engage in self-articulation, shaping our own narrative over time. Looking back at our younger selves, we may cringe at past actions or view them with newfound understanding. Our internal monologues help us process regrets and growth.
But what if we outsourced such personal expression? LLMs like ChatGPT are trained on vast amounts of internet text to predict the next word from context. They represent a statistical centre of gravity for language, encompassing everything from great literature to marketing jargon. Yet relying on an AI for personal articulation risks erasing our humanity.
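To make the “statistical centre of gravity” point concrete, here is a minimal, hypothetical sketch of next-word prediction: a toy bigram counter that returns whatever word most often followed the previous one in its tiny training text. This is not how ChatGPT actually works (real LLMs are neural networks trained on vastly more data), but it illustrates the principle: the output is the typical continuation, not any particular person’s voice.

```python
# Toy illustration only: a bigram "next word" predictor.
# It picks whatever word most often followed the previous word
# in its training text -- a crude statistical centre of gravity.
from collections import Counter, defaultdict

training_text = (
    "to my daughter on her wedding day I wish you joy "
    "I wish you love I wish you a long and happy life"
)

# Count, for each word, which words follow it and how often.
follower_counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follower_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follower_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("wish"))  # -> 'you': the statistically typical continuation
print(predict_next("you"))   # -> whichever follower of 'you' occurs most often
```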
Talbot Brewer calls this phenomenon “degenerative AI.”
These language machines aim at our essence, challenging our role as beings created in the image of God—the Word. It’s a profound question to consider as we navigate the intersection of technology and self-expression.
The concept of self-erasure through assimilation into a mass, rather than a genuine community, isn’t unique to LLMs. Thinkers like Heidegger, Kierkegaard, and Tocqueville observed this phenomenon long before. Around the turn of the millennium, we were intrigued by the “wisdom of crowds” and the potential of hive minds. The idea was that a superior global intelligence would emerge from the collective Web—a meta, synoptic force transcending individual perspectives.
In 2006, Jaron Lanier described the trend of removing traces of individual humans from online content. Consensus Web filters assemble material from aggregator sites, presenting content as though it issued from a supernatural oracle. We now consume what algorithms derive from other algorithms, based on what collectives selected from the writings of mostly anonymous amateurs.
This shift isn’t limited to the online realm. The fetishisation of aggregation influences decision-making in government, corporations, and universities. Lanier, who was once asked to propose solutions on his own, now finds himself merely tweaking collectively drafted essays alongside other consultants. This erosion of individual agency raises important questions about our humanity and the role of technology.
Lanier suggests that collectivism appeals to large organisations for institutional reasons: no individual should have to bear risk or responsibility. In our uncertain, liability-conscious times, institutions feel little loyalty to executives or lower-level members. People within organisations fear saying the wrong thing, so they find safety in collective tools like wikis.
Lanier’s experience with such tools reveals a loss of insight and nuance; official or normative beliefs tend to dominate. At a gathering where Talbot Brewer discussed “degenerative AI,” the sociologist Joseph E. Davis noted that AI fills voids left by diminished human judgement. Education becomes mere information exchange, lacking authority and care. Medicine follows guidelines, sometimes with worse outcomes than experienced practitioners’ judgements would produce. Even dating apps optimise mate selection through checklists of criteria and social-media integration.
Let’s step back even further, before the internet era, to explore how AI reflects a broader trend in democratic societies. Kierkegaard, writing in the 1840s, observed a curious phenomenon related to newspapers:
People today engage in sensible conversations, yet there’s an anonymity to them. Our judgements are so objective and all-encompassing that it hardly matters who expresses them. In Germany, even lovers consult phrasebooks, reducing their interactions to an impersonal level.
This “going meta” erases personal perspectives, turning us into representatives of the general public. Relationships lose depth: the pupil’s fear of the schoolmaster fades, replaced by talk about education in general. The whole age becomes a committee, and even the relation between parent and child loses its intensity.
Kierkegaard believed that genuine attachments, and genuine rebellions, arise only where authority is differentiated. But a fake egalitarianism tempts us to avoid that task. As AI becomes more prevalent, we must resist letting it replace our subjectivity. Like the father at his daughter’s wedding, we retain the freedom to refuse.