chapter one → hyperconnected loneliness
At some point during my lifetime, the Internet stopped being a place we visit and turned into an environment we inhabit. When I was in middle school, going online was a ritual accompanied by weird digital sounds that gave me anxiety — or maybe it was knowing I was occupying the telephone line and could get in trouble if someone called our household. Now, to write this chapter, I had to perform the opposite ritual of going offline: putting the phone on airplane mode, disconnecting the computer from wifi. Yet again I’m anxious — this time about missing a work-related message, an important notification, or someone reaching out to me in cyberspace and not finding me.
The reversal is complete: what once required intention now requires its absence. Somewhere between these two anxieties lies the transformation this thesis explores. Our everyday tools have quietly become infrastructures of control, turning our daily actions into streams of readable data. What began as voluntary participation gradually shifted into structural dependency. To not be online today is to forfeit access to work, community, and reality itself. The medium no longer hosts the message — it absorbs, modulates, and commodifies it. In this condition, understanding technology means understanding ourselves anew — not as free agents, but as preconditioned subjects within a hyperconnected ecosystem that we barely chose.
Connecting People
The greatest promise of information technology was connection. The 21st century was meant to bring us closer than ever — fostering empathy, compassion, and a renewed care for one another and the planet. The World Wide Web was going to be our cozy home, for the whole species to inhabit together like a big spider family. But in the end, there was only one spider: the rest of us were errant flies, waiting to be fed on. Instead of promised unity, the world has become hyperpolarised — economically, politically, ideologically. The phrase chronically online is rapidly losing its meaning: it is no longer an affliction but an expectation; to be socially present now means to exist in some form of cyberspace at all times. When someone goes offline for even a couple of days, we start to worry. We are hyperconnected to the cloud but tragically disconnected from each other — and even more so from ourselves.
In the early 2000s, Nokia, the largest cellphone manufacturer at the time, had a catchy marketing slogan: “Connecting People”, famously accompanied by an image of two hands reaching for each other. The idea of the mobile phone as a symbol of connection has since become etched into the mass consciousness, and most innovations in the field have been presented to us through the lens of deep (fake) social connection. Look at any Apple or Facebook commercial, and you will see the same idea repackaged and resold to us again and again — separated family members calling each other from different countries via FaceTime, coworkers collaborating productively in Microsoft’s virtual office spaces or on cheerful Zoom calls, teenagers expressing themselves creatively through Snapchat filters and animated emojis. Big Tech has been telling us the same story for almost two decades — that they are in the business of connecting people. And we are still listening, perfectly aware that it is not true.
The real agenda of these corporations has been exposed countless times, on the biggest scale. The problem is that when it comes to information, the biggest scale itself was appropriated, bought, and paid for by these very companies long ago. Through a sleight of hand, every revelation about Big Tech spying on users, trading private data, hijacking attention, and manipulating emotions becomes an international sensation on major media platforms, provokes a controlled outrage, and ultimately gets normalised, taking its place in the postmodern media canon. Hardly anyone believes these companies prioritise the public good or care about fostering healthy social connections, but nobody really cares anymore either. It is yesterday’s news — an outdated sensation that barely deserves any attention past its short expiration date. In times of constant turmoil, at the peak of a mental health crisis, we act as the rational and social animals we are — dismissing alarmist narratives if the public consensus tells us that it is fine to do so, even though we know that this consensus is artificially manufactured by the powers that be and utterly disconnected from what is left of our shared reality.
Too Much Information
Platform capitalism breeds information fatigue on an industrial scale. It does not merely flood us with facts; it also erodes the reflex that sorts those facts from noise. A missile hitting a hospital now appears in the same vertical slot as a meme video reel and a targeted sneaker ad. By design, the feed equalises their weight: one call to action, one tap, one undifferentiated metric of “engagement”. The result is a learned numbness that looks like nihilism but feels more like exhaustion. The deeper mechanism is a squeeze on cognitive sovereignty — the micro-interval in which relevance is decided before the thumb moves. Algorithmic conveniences like infinite scroll, autoplay, and “For You” sections compress that interval, outsourcing first-pass judgment to the interface itself. Attention becomes reactive, guided by prompts that arrive pre-ranked and source-obscured. Verification lags behind fabrication; generative AI tools can mint a plausible fake in seconds, while confirmation still costs manual labour. Faced with this asymmetry, users adopt a pragmatic suspension of belief: provisional, low-commitment scanning that keeps the feed moving.
Perhaps most troubling is our resigned acceptance of misinformation itself. The emergence of sophisticated deepfakes and AI-generated content has accelerated the collapse of shared reality, and this existential threat to democratic discourse is met with a collective shrug. Such a reaction makes sense: when everything on the news is outrageous and potentially fake, nothing seems particularly worthy of outrage. The information landscape we inhabit is so vast and contradictory that exhaustion becomes a natural response. This is the essence of the post-truth condition — not that truth has ceased to exist, but that we have stopped believing that finding it matters. When confronted with contradictory information from multiple sources, many retreat into epistemological nihilism — the exhausted conclusion that determining truth is either impossible or simply not worth the effort. This resignation serves power by effectively neutralising accountability: if nothing can be definitively known, then nothing can be definitively condemned, and no one has to bear responsibility. Except, of course, that everyone still has to face the consequences.
Algorithms read the flattened affect as a demand for sharper novelty cues and comply by amplifying whatever spikes arousal — be it outrage, sentimentality, or spectacle. Eye-tracking studies show evaluative effort migrating from content to meta-signals: like counts, badges, familiar stylistic cues. Users are still free to choose, but the mechanism of their choice is hijacked — it now operates through coarse filters that models can predict with ease. What platforms harvest is not minutes spent; it is the decision gap itself, repackaged as a steady, forecastable stream of behaviours and gestures. Treating the problem as one of volume alone misses these structural levers: dwell-time thresholds, the ratio of endogenous to algorithmic cues, the latency between publication and independent verification. Interface experiments that insert small pauses — scroll friction, no autoplay, citation previews — restore variance in link selection and reading depth without demanding total disengagement, as the sketch below illustrates. Framed this way, the conversation moves from moral lament to design parameters that can be measured, adjusted, and contested: a practical politics of attention rather than another elegy for lost truth. This information paralysis doesn't just exhaust us — it forecloses our ability to imagine different systems entirely.
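A toy simulation can make that claim concrete. The model below is a sketch under invented assumptions, not a description of any real platform: each feed item carries a platform-assigned rank boost, each user holds private relevance judgments, and an enforced pause shifts decision weight from the boost to the user's own judgment (the 800 ms saturation point and all weights are illustrative).

```python
import random
import statistics

random.seed(7)

def simulate_screen(n_users=100, n_items=20, friction_ms=0):
    """One screenful of feed shown to many users: each item carries a
    platform-assigned rank boost, each user holds private relevance
    judgments. Returns how many distinct items end up being chosen."""
    boosts = [random.random() for _ in range(n_items)]
    deliberation = min(friction_ms / 800, 1.0)  # toy saturation at ~800 ms
    chosen = set()
    for _ in range(n_users):
        relevance = [random.random() for _ in range(n_items)]
        scores = [(1 - deliberation) * boosts[i] + deliberation * relevance[i]
                  for i in range(n_items)]
        chosen.add(max(range(n_items), key=scores.__getitem__))
    return len(chosen)

for ms in (0, 200, 800):
    mean_distinct = statistics.mean(simulate_screen(friction_ms=ms) for _ in range(50))
    print(f"friction={ms:>3}ms -> ~{mean_distinct:.1f} distinct items chosen per 100 users")
```

With zero friction, every simulated user follows the same pre-ranked cue and selection collapses to a single item; as the pause grows, choices spread back across the screen, which is precisely the variance the interface experiments above aim to restore.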
There Is No Alternative
“TINA" — There Is No Alternative — was neoliberalism’s battle cry, a slogan that captured its very essence. Its victory condition was psychological: render every path but market logic unthinkable. Mark Fisher called the result "capitalist realism," a pervasive sense that nothing can change even as everything degrades. In his own words, "capitalism seamlessly occupies the horizons of the thinkable” — any viable alternative expanding that horizon immediately falls victim to the capitalist machine the moment it's conceived. The system's greatest triumph is not just dominating the present but colonising our conception of the future. We experience this temporal foreclosure as a strange double-consciousness: intellectually aware of capitalism's catastrophic trajectories (ecological collapse, mental health epidemics, attention-fragmentation) yet behaviourally locked in a perpetual "acting as if" no viable alternative exists. This knowing-but-not-knowing generates "reflexive impotence" — that peculiar post-ironic condition where even our most radical critique gets aestheticised, transformed into just another affective commodity circulating in the marketplace of edgy ideas.
In Germany, similar neoliberal thinking manifested in the concept of alternativlos, frequently associated with Angela Merkel’s policies during the eurozone crisis. This framing triggered widespread backlash, with alternativlos being declared the Unwort des Jahres (“unword” of the year) in 2010 — reflecting public frustration with the narrowing of political discourse. During this period of economic debate, the AfD emerged in 2013, initially as a Eurosceptic party challenging Germany’s approach to the eurozone crisis. The “alternative” in AfD may not have referenced the “alternative” in TINA directly, but the two rhymed — and eventually blended together in the public consciousness. Monopolising the very notion of alternative might be AfD’s greatest achievement: if you are not happy with the status quo, the only other option on the menu is a far-right dystopia. By positioning itself as the primary opposition to the status quo, the far right created a false binary: either accept the current system as inevitable or embrace nationalism as the only viable alternative. This narrows our political imagination precisely when we most need to envision new possibilities.
This narrowing of the political field has recently found an amplifier in Elon Musk, who has used his control over X (formerly Twitter) to flirt openly with far-right narratives — including platforming AfD figures, boosting anti-immigrant disinformation, and interacting with known neo-Nazi accounts. In 2025 he infamously performed a Sieg-Heil-style salute at Trump’s inauguration and openly endorsed AfD’s Alice Weidel, collapsing the idea of “free speech” into open alignment with fascism. By framing such figures as defenders of “free speech”, Musk masks reactionary politics as neutral dissent, reinforcing the illusion that any pushback against the neoliberal consensus must come from the right. In doing so, he doesn’t just reflect the capture of alternatives — he massively accelerates it, using all the power of his enormous wealth. The narrative of there being only one alternative benefits him personally — the richest person on the planet at a time of record-breaking inequality — as he can extract direct profit from redirecting public frustration and keeping the population divided, distracted, and locked into platforms he controls.
So how exactly did we get here? Ironically, the question itself hints at the answer. It is asked so often that one could argue this mode of perception (looking up, disoriented, after a long stretch of distraction) is now the default mode of human existence — like missing your stop while scrolling social media on a bus, only to look up and find yourself somewhere unexpected. This collective bus ride feels like a stream of oblivious distraction with scattered moments of acute self-awareness, and the number of stops keeps decreasing. Our attention is pulled in multiple directions simultaneously; we are used to being constantly distracted, and even profoundly important developments can capture our undivided attention only briefly.
Precorporation and Interpassivity
In today’s oversaturated media landscape, “exposed” means “normalised”. We tend to mistake exposure for resolution, believing that simply bringing issues to light means they have been addressed, when in reality these revelations are quickly absorbed into our collective consciousness without prompting meaningful change. This cognitive sleight-of-hand is commonplace in our information ecosystem: we get triggered by a news post, our emotional reaction is milked and monetised by the platform in question, and its algorithm swiftly redirects our attention to the next item, perpetuating the cycle. Most modern scandals follow a predictable arc: the shocking exposé, the flurry of outrage, and then — the big nothing. The investigation into political corruption that dominated headlines for days vanishes completely within weeks. The revelation of corporate malfeasance is soon replaced by the company’s latest product launch. The war crimes documented in high definition become just another entry in an endless catalogue of atrocities, with the names of devastated regions reduced to hashtags on news feeds.
Fisher's notion of precorporation crystallises the terminal condition of cultural expression under late capitalism — not merely the absorption of once-radical elements, but their anticipatory neutralisation. "What we are dealing with now," Fisher asserts, "is not the incorporation of materials that previously seemed to possess subversive potentials, but instead... the pre-emptive formatting and shaping of desires, aspirations and hopes by capitalist culture" (2009, p. 9). The system now shapes what we want before we know we want it, turning even our rebellions into pre-formatted products within capitalism's affective marketplace. The radical edge is blunted before it can cut — predetermined, calculated, and thus defanged of truly disruptive potential.
An example from popular culture that Fisher uses to illustrate this phenomenon is WALL-E — a critique of consumer capitalism and corporations, produced by Disney. A more contemporary example would be Netflix’s Don’t Look Up — a satire about climate denial that allows viewers to feel environmentally conscious while binge-watching on devices powered by massive server farms. The platform profits from both the content and the guilt, turning climate anxiety into subscription revenue. This type of content perfectly exemplifies what Slavoj Žižek calls interpassivity — the delegation of one’s political or emotional reactions to an external medium. The film performs our anti-capitalism for us, so that we can keep consuming with a clear conscience. Like canned laughter in a sitcom, it interacts with itself so that the viewer can remain passive yet feel involved. In recent years, the format has really taken off: major studios have been producing such interpassive content at extremely high rates. Black Mirror, an anthology show painting the techno-dystopias of platform capitalism, has been acquired by Netflix — the biggest platform in the business. Severance, a show about workplace alienation and corporate control, streams exclusively on Apple TV+ — a platform run by one of the world’s largest corporate exploiters. HBO dramas like The White Lotus and Succession, exploring class tension and liberal hypocrisy, ultimately reinforce the status quo by letting viewers enjoy the spectacle of critique without requiring any action. Artistic merit and cultural relevance aside, these shows function as performative critique — staging rebellion as spectacle, so viewers can consume dissent without ever enacting it. The system mocks itself, so that we don’t have to.
Techbroligarchs’ Engineered Desires
When a power structure presumes what one desires, it can eventually make that presumption real if it is persistent enough. By eliminating alternatives, bombarding individuals with algorithmically fine-tuned content, and exploiting cognitive shortcuts and reward systems, it gradually reshapes what people believe they want. Over time, the boundary between internal desire and external conditioning blurs, creating subjects who mistake engineered impulses for authentic intentions, confident they arrived at them on their own.
We are used to thinking of such structures as faceless systems — a cold-blooded matrix calculating its moves in pursuit of profit, with tech CEOs and billionaires merely acting as its agents. But this framing can obscure the fact that these individuals are not just passive conduits; they actively shape and reinforce the system’s logic through their ideologies, fantasies, and personal ambitions. The system may be machinic, but it is also deeply human — infused with biases, blind spots, and desires that get coded into its architecture. The agents are not simply following the system’s orders; they are the vessels through which the system’s agency is enacted, often making irrational decisions driven by ego, incompetence, spite, or mere delusion. With the normalisation mechanisms in place, these decisions are swiftly rationalised, absorbed, and retrofitted into the system’s logic. It becomes routine — almost effortless — to render even the most delusional impulses true, so long as they align with the overarching imperative of growth, engagement, or control.
Apart from enforced desires, another good example of such delusional impulses is made-up problems. The 2020 documentary The Social Dilemma (ironically, produced by Netflix) shed light on one such problem, which can loosely be formulated as “the user spends too little time on our social media platform, limiting our ad revenue”. After billions of dollars were spent on the research and development of “engagement architectures”, Silicon Valley’s finest minds produced a solution to this non-problem: the infinite scroll, powered by algorithms more sophisticated than anything that came before. One could argue that these algorithms were, in a way, the ancestors of what we call AI today — not just technologically but also conceptually — made to serve humans on the surface, yet stemming from a made-up logic and an eerily misaligned set of goals. In today’s post-LLM world, the narrative — even though now easily extrapolated across endless pages of word salad — essentially remains the same: maximise engagement, predict behaviour, and optimise for retention. What has changed is the scale and opacity of these systems, which no longer just respond to human input but preempt and shape it, blurring the line between prediction and production of desire.
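Stripped of scale and branding, the loop this narrative describes is structurally simple. The sketch below is a deliberate caricature under invented assumptions (the categories, dwell times, and optimistic prior are all illustrative); real systems swap the running average for large learned models over thousands of features, but the objective is the same:

```python
from collections import defaultdict

class EngagementFeed:
    """A deliberately minimal caricature of an infinite-scroll ranker:
    serve whatever is predicted to hold the user longest, observe the
    outcome, update the prediction, repeat. Nothing about the user's
    goals enters the loop; only retention does."""

    def __init__(self):
        self.dwell_sum = defaultdict(float)  # running dwell totals per category
        self.dwell_count = defaultdict(int)

    def predicted_dwell(self, category):
        n = self.dwell_count[category]
        # Optimistic prior for unseen categories encourages exploration.
        return self.dwell_sum[category] / n if n else 1.0

    def next_item(self, candidates):
        # "Maximise engagement": rank purely by predicted dwell time.
        return max(candidates, key=self.predicted_dwell)

    def observe(self, category, dwell_seconds):
        # "Predict behaviour": every reaction becomes training data.
        self.dwell_sum[category] += dwell_seconds
        self.dwell_count[category] += 1

# Whatever the user lingers on (outrage, cats, anything) gets served more.
feed = EngagementFeed()
for category, seconds in [("outrage", 12.0), ("news", 3.0),
                          ("cats", 8.0), ("outrage", 15.0)]:
    feed.observe(category, seconds)
print(feed.next_item(["news", "cats", "outrage"]))  # -> outrage
```

Nothing in the loop represents what the user wants; the only signal it ever sees is how long they stayed.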
In a recent (January 2025) interview, the CEO of Suno, one of the biggest AI music generation companies, stated that “it’s not really enjoyable to make music now. (...) I think the majority of people don’t enjoy the majority of time they spend making music.” The remark sparked some minor pushback in the comments, but reactions were largely subdued — by now, most people are used to such statements coming from tech billionaires. It’s hardly surprising when yet another imaginary problem becomes the foundation for a billion-dollar business model. Equally unsurprising is that the business actually works: the notion that the creative process is unenjoyable, and that people would rather have AI do it for them, may seem absurd — but if enough money is poured into solving a nonexistent problem, the object of that nonexistent desire begins to materialise. Alternatives are eliminated, the once absurd becomes the default, normalised through cognitive hijacking and sheer exposure. What was once unthinkable is now inevitable — not because it made sense, but because it was made familiar.
Harvest Protocols
This dynamic does not require a moral framing to be understood and taken seriously. Whether one sees it as manipulation or innovation, the fact is that it aligns tightly with measurable shifts in wealth and power. Today, the top 1% hold around 45% of global wealth — more than at any point in modern history. Since 2020, that elite has captured roughly two-thirds of all new wealth, while incomes for the bottom 50% barely budged. Meanwhile, labour precarity deepened through gig platforms, and local news outlets collapsed under the pressure of algorithmic advertising control. Platform workers earn an average of $8.55 per hour while their data generates billions. A 2023 study found that DoorDash drivers earn $4.82 per hour after expenses, while DoorDash’s market cap exceeds $50 billion — built entirely on data extracted from their labour. Uber drivers, despite working full-time, often earn below minimum wage while Uber’s algorithms optimise for company profit, not driver welfare. These shifts were not incidental — they were structurally tied to the platforms’ infrastructural logic.
Tech monopolies did not simply reflect these dynamics — they operationalised them. Revenue models based on predictive profiling and engagement maximisation required vast behavioural datasets, which in turn demanded constant interaction. This imperative shaped user experience at every level, from interface design to algorithmic curation, ensuring that attention could be monitored, nudged, and eventually harvested at scale. What emerged was not just a new business model but a new mode of extraction: one that fed off cognition itself. Like a parasite grafted to the nervous system, the platform economy embedded itself into the daily rhythms of life, drawing value not from labour or capital in the classical sense, but from the orchestration of perception and desire.
It is here that the rhetoric of freedom collapses into architecture. Beneath the surface of open platforms and user choice lies a dense mesh of incentives and constraints, calibrated to make certain behaviours more likely than others — if not through brute force, then through the quiet logic of interface design, content prioritisation, and feedback loops. The system has no interest in mind control — it only needs to control the conditions under which thinking occurs. This is how parasites differ from other predators: instead of killing the prey, they find the optimal moment to embed, extract just enough, and stay hidden. What these systems consume is not labour or capital, but the conditions of attention itself. And once the contours of perception are shaped, the rest follows. Understanding this parasitic relationship is the first step toward designing systems that serve human attention rather than extract it — restoring the micro-moments of choice that platforms have quietly eliminated. These aren't just design preferences; they are the basic conditions for mental agency in a hyperconnected age.
Chapter Conclusion
The hyperconnected loneliness of our digital age reveals itself not as a simple matter of too much screen time, but as the systematic capture of agency. Platform capitalism has succeeded where traditional forms of control have failed — not by restricting our choices, but by colonising the very moment when those choices are made. Like a parasitic vine that maintains the appearance of a healthy host while redirecting its life force, platforms maintain the surface aesthetics of connection while siphoning our capacity for autonomous decision-making. This isn't just a social problem but a personal one: it alters our capacity to decide, imagine, and choose at the most granular level of daily life.
This chapter argues that the hyperconnected network we inhabit is not an infrastructure of communication, but one of extraction — and the resource being extracted is neither data nor attention in their raw forms, but the micro-intervals of self-direction that precede human decisions. The "information fatigue" and "precorporation" we experience are symptoms of this deeper extraction — evidence that our ability to author our own responses to the world has been quietly rerouted through algorithmic intermediaries. The next chapter will examine how this parasitic relationship evolved from the internet's originally rhizomatic structure, turning to botany to reveal the specific mechanisms by which haustorial extraction systems take hold and sustain themselves.