pkhan’s master thesis

                                       


   introduction

   chapter 1: hyperconnected loneliness

   chapter 2: enter the haustorium

   chapter 3: ritual firewalls

   chapter 4: opaque by design

   bibliography

   appendix: praxis documentation

the void of more



Key concepts: technological singularity, LLMs, crisis of meaning

Consider this paragraph:

I need to write something about large language models that is abstract enough to allow various interpretations, but also specific in a way that keeps it connected to the general topic of this chapter: decline of Western culture through commodification of ideas. The lack of originality in late capitalism’s cultural landscape cannot be attributed to algorithmically generated content and widely available tools generating it, at least because of chronological discontinuity, however both phenomena seem to be interconnected and overlapping -- perhaps symptoms of a larger condition, or merely a zeitgeist of our troubled times.

It was written by a human in 10 minutes, and according to X, Y of CO2 emissions were produced in the process. The text is clumsy and hard to follow, which is obvious even to the person who wrote it. A more experienced writer could choose better words and arrange them more elegantly, perhaps spending about half the time on the task (I generally write very slowly).

Here is the output of an open-source LLM (deepseek-r1:8b) run locally on a consumer-grade laptop, prompted to “rephrase and improve” the above paragraph:

The exploration of large language models (LLMs) presents an intriguing lens through which to examine the broader cultural dynamics of late capitalism. To maintain a balance between abstract interpretation and thematic relevance, it’s essential to underscore how these technologies are not solely responsible for the decline of Western culture but rather reflect and amplify existing trends in commodification. The lack of originality in contemporary cultural production can be traced back to the logic of late capitalist commodification, where intellectual and creative labor is devalued and repurified for market consumption. This trend manifests in the proliferation of algorithmically generated content, which, while a symptom of technological advancement, is also a product of a cultural landscape that prioritizes efficiency and standardization over creativity.

This text was generated in 12 minutes, although to be fair I had to cut it in half to make it approximately the same length as the original (LLMs are known to be universally verbose). My MacBook Air with 8 GB of RAM wrote a more coherent text in the same time frame without me pushing the buttons, producing X amount of CO2, about Y times less than the two of us in combination. A publicly available online model like ChatGPT would 
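For readers curious about the mechanics, the local setup behind this experiment can be sketched roughly as follows. This is a minimal reconstruction, assuming the model is served through Ollama's default local REST endpoint; the model name matches the one above, but the endpoint, prompt wording, and parameters are illustrative assumptions rather than a record of the exact session:

```python
import json
import urllib.request

# Assumed setup: deepseek-r1:8b served locally by Ollama, which by default
# listens on localhost:11434. The prompt text is a placeholder, not the
# verbatim instruction used for the example output quoted above.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(paragraph: str, model: str = "deepseek-r1:8b") -> dict:
    """Assemble the JSON payload a single, non-streaming completion needs."""
    return {
        "model": model,
        "prompt": "Rephrase and improve the following paragraph:\n\n" + paragraph,
        "stream": False,  # ask for one complete response instead of chunks
    }


def rephrase(paragraph: str) -> str:
    """Send the paragraph to the locally running model and return its rewrite."""
    data = json.dumps(build_request(paragraph)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the local server running, `rephrase(paragraph)` would return the model's rewrite in one string; generation time and verbosity will of course vary with hardware and model version.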

Yet, at least to most people reading in 2025, it probably seems clear that the value of this text is not particularly high.

What exactly was lost in the move from the first, human-written passage to the second, machine-generated one? It might be tempting to reach for the convenient trope that human errors and stylistic imperfections bring a character that cold, soulless machines cannot generate, or that the ideas behind both the example text and all the texts LLMs are trained on are exclusive to the human mind. Something even more important, however, disappeared in the transition from the first text to the second: my ability to reprocess my own thoughts into a more coherent and readable text. Not all of it, of course, and not quite in a literal sense: I did not actually forget how to rewrite a text and make written words make more sense, but as these models get more advanced and accessible, using them to improve one's writing becomes commonplace. For most internet users this procedure already feels routine, and my guess is that some readers will recognise parts of their own workflow in the example above. Everyone who writes probably knows the experience of having a concept in mind without being quite sure how to put it coherently and elegantly, and the option to bypass this stage seems too tempting to resist.

This cognitive transformation is not unique to LLMs and writing; similar things have happened to us before, some of them within my lifetime. In the mid-2000s, I knew at least a dozen telephone numbers by heart, some of which I can still recall now. Memorising digits was common and necessary: street addresses, birthdays, passwords, we stored it all in our long-term memory, until we didn't have to anymore. This is why for people of my generation it might be easier to remember a childhood friend's phone number or birthday than those of any current friend: we no longer store this kind of data in our brains, we use the cloud for that. In theory, this should have freed our minds from meaningless numbers and made more space for memorable moments and creative ideas, but is that really what happened? Recent studies suggest that our long-term memory is now impaired, and not just for telephone numbers. Our brains might have shrunk, but our clouds have grown drastically: they are now big enough to store all of our memorable moments so that we don't have to, easily downloadable in high definition for a small monthly fee.


Whether these changes make us … is pure speculation.