In my previous series of articles, I wrote a lot about unsolicited information, algorithms, overstimulation, and the unusual urban environment that somehow pulls us out of our “natural” state of being.
This time, I want to take a look at something that’s somewhat related to all of this.
I’m talking about technology.
And as I mentioned earlier, we won’t be pointing fingers. Blaming any specific aspect of human culture is pointless and leads nowhere.
And despite what some people might assume, I am not anti-technology either. Quite the opposite. I love technology. I spend absurd amounts of time using it. I write these articles using it. Most of my interests exist because of it. Technology itself is not “the problem.” In fact, I do not even think there is a clearly definable problem with some easy solution waiting around the corner.
I have a feeling…
I just have a feeling that the boundary of what it means to be human keeps being pushed further and further out. It’s nothing new. But I’m not thrilled about it. I have mixed feelings about this, and I’m not alone. Many others share these feelings.
We don’t even need to dwell too much on AI, our possible future, or hypothetical scenarios. All we need to do is look back a few years to our recent past, to our adolescence (at least for those of us who are younger), and reflect on the time when we were growing up.
How are we different from “the older generation” that “jumped on the bandwagon” of technology?
Maybe that’s a generalization, but older generations often approached the internet with optimism and techno-utopian excitement. For them, the internet was a novelty. Exploration. Freedom. Possibility. “The future is here.” They experienced the digital revolution as something external entering their lives gradually.
But people from my generation – especially Gen Z – grew up inside the system after the novelty phase was already over. We did not discover social media. We were shaped by it during development.
That distinction matters enormously.
Social psychologist Jonathan Haidt (author of the book The Anxious Generation, which you should definitely read, or at least listen to one of his podcasts 🙂) describes this phenomenon through the concept of the “phone-based childhood.” His argument isn’t just some boomer-style complaint about teenagers using smartphones. His argument is that the developmental environment itself changed. Childhood became increasingly mediated through permanent connectivity, algorithmic content exposure, social comparison, quantification through likes and metrics, and forms of continuous peer visibility that resemble ambient social surveillance.
And honestly, I think many young people intuitively understand this already without reading a single study.
A lot of us experienced the psychological consequences firsthand. Anxiety, compulsive comparison, attention fragmentation, doomscrolling, social validation loops, the strange feeling of permanently performing yourself online – these things were not abstract cultural theories for many people in my generation. They were developmental conditions.
Personally, social media affected me psychologically much more than I was willing to admit at the time. And I know many people around the world who would say something similar.
What sometimes surprises me is how easily many older people still underestimate the intensity of this experience, perhaps because they encountered these systems as adults rather than during psychological development itself.
I recently saw someone criticizing proposed regulations that would restrict social media access for younger teenagers and explaining how children could bypass these “stupid rules”. And honestly, while I am not particularly enthusiastic about regulation as a universal solution (I don’t even think regulations are the right solution for anything), the casualness of that attitude felt deeply strange to me. It struck me as roughly equivalent to publishing instructions on how teenagers could get around a ban on injecting heroin.
Whatever. The guy who wrote this was over 40, so he definitely didn’t go through puberty with Instagram.
We already recognize that certain environments, substances, and systems can profoundly affect developing brains. Yet when it comes to algorithmically optimized digital environments designed to capture attention, shape behavior, and maximize engagement, many people still talk about them as if they were psychologically neutral.
I do not think they are neutral at all.
I like my generation mainly because it recognizes this. Gen Z is probably the most digitally immersed generation in human history, but also among the most digitally exhausted.
And that is precisely what is evident in our current approach to technology. Not to the internet, not to social media, but to the overall system we now live in – the one that shapes us and guides us in a certain direction.
You can already see the psychological and cultural consequences of this everywhere: digital detox culture, dumb phones, nostalgia for offline life, anti-algorithm sentiment, NEETs, tang ping (“lying flat”), and the rejection of hustle culture and constant productivity.
The relationship people have with the internet no longer feels purely enthusiastic or optimistic to me. It feels much stranger than that – as if people simultaneously distrust these systems, feel exhausted by them, joke about being trapped by them, and yet cannot fully imagine social existence outside of them anymore.
And AI is arriving precisely into this psychological atmosphere.
That is important.
Because unlike previous generations, younger people already experienced firsthand that technologies marketed as liberating and connective can also become psychologically corrosive. We know what it feels like when something enters society as entertainment or a magical problem-solving tool and slowly becomes infrastructure.
And because of that, many younger people react to AI with much more ambivalence than older generations.
Especially when AI starts entering deeply personal domains.
Not just work, but life itself. And many people are genuinely excited about this.
But personally, I find parts of it deeply disturbing.
So am I a tech hater? No, I’m not. But I realize the price I’m paying.
For example, I use things like Whoop or biometric trackers occasionally and I find them incredibly interesting as experimental tools. I love data. I love biohacking. I love observing systems. But I also know I could never live permanently in that state. If I measured my sleep quality, heart rate variability, stress response, and recovery metrics every day for years, I think I would slowly go insane.
At some point optimization itself becomes pathological.
The same applies to AI life-management systems. Some people become genuinely amazed by the idea that an AI could optimize their routines, organize their habits, track their productivity and help them achieve goals more efficiently.
But my emotional reaction to this is almost the opposite.
I feel horror.
Because something about willingly outsourcing more and more layers of human existence to systems feels psychologically unnatural to me.
And yes, I know people will immediately say:
“Humans always feared new technologies.”
“They said the same thing about books.”
“They said the same thing about electricity.”
“They said the same thing about television.”
And technically, they are right.
Every technological revolution changes consciousness to some extent.
The invention of writing changed memory. Printing changed thought distribution. Electricity changed sleep patterns. The internet changed communication. Social media changed social identity.
Of course human psychology continuously adapts. But I think these arguments often completely miss the point.
Because adaptation alone does not answer whether something feels psychologically healthy, embodied or human.
Humans can adapt to many things. We adapt to cities. To pollution. To propaganda. To surveillance. To overstimulation. To loneliness. To artificial environments.
However, adaptation is not proof of health. And honestly, what I am describing is not purely ideological or intellectual. It is visceral.
There is a very specific feeling I get after spending too much time interacting with screens, feeds, algorithms, and digital systems continuously. A feeling of disembodiment. A strange psychological flattening. Like reality becomes thinner somehow.
And the weirdest part is that I genuinely love the internet.
I love access to information. I love global communication. I love digital creativity. I love technological experimentation.
But prolonged immersion inside digital environments often creates a subtle feeling of disconnectedness that becomes impossible for me to ignore.
The real world is not inside Facebook, inside Instagram, inside terminal windows, inside productivity dashboards or endless simulations of sociality.
And because of that, I increasingly feel the need to balance all of this consciously in my everyday life. I spend more time walking outside, going into nature, exercising, meditating, doing yoga, staying away from screens for longer periods of time and generally trying to remain mentally present in physical reality instead of permanently absorbed in digital environments. I do not see these things as some mystical solution to modern life, but I noticed that without them, prolonged immersion in online spaces slowly starts affecting me psychologically in a way that is difficult to describe precisely. Everything begins to feel flatter, noisier, less real and somehow disconnected from immediate human experience.
Jean Baudrillard, fuck you!
I shouted those words in my dream as soon as I realized it was a dream and not reality.
But let’s get to the point.
When it comes to the internet, artificial intelligence, and social media, I often find myself thinking of the French philosopher Jean Baudrillard, whom you should definitely get to know.
Baudrillard’s idea of the hyperreal is often misunderstood as some kind of childish concept like “The Matrix” or “a simulation.” But that is not really the point. The hyperreal is not simply an illusion pretending to be reality. It is a condition in which the distinction between reality and representation becomes increasingly impossible to draw.
The simulation becomes more real than the real – the representation of things gradually becomes more socially meaningful, emotionally influential, and psychologically dominant than the thing itself.
For example, people increasingly experience places through Instagram before they ever physically visit them. Entire restaurants, cafés and tourist destinations are now designed partially around how they will look online rather than how they actually feel to inhabit physically. Sometimes people travel somewhere mainly to reproduce an image they have already seen online. The representation comes first. Reality follows afterward almost as confirmation.
The same thing happens with identity. People no longer simply express themselves online. Over time, they begin subtly adapting themselves to what is visible, attractive, shareable and algorithmically rewarded. The representation starts feeding back into the construction of the person. You are not only living your life anymore; part of your mind continuously observes how your life appears externally.
And this process slowly changes perception itself.
Experiences begin competing with their representations. Sometimes the photographed sunset feels socially more important than the actual sunset. The online image of friendship can become more active than friendship itself. Visibility is replacing presence. Metrics are replacing feelings. Documentation is replacing memory.
Hyperreality is therefore not obviously fake. In fact, it feels completely normal from the inside. The representation blends so seamlessly with ordinary life that eventually people stop noticing the distinction entirely.
Artificial intelligence pushes this process even further because the system no longer merely filters, ranks, or amplifies reality. It increasingly generates reality itself.
Social media still depended on human beings producing content, images, opinions and emotional expression. AI changes the situation because now the simulation itself becomes capable of speaking back.
Text, images, conversations, emotional support, companionship, advice, interpretation and even forms of intimacy can now be generated synthetically without any direct human presence behind them at all.
And the unsettling part is that many of these simulations function well enough psychologically that the distinction between “real” and “artificial” interaction begins mattering less emotionally than we might expect. It can feel like we are losing something human here, and even though we sense that “this isn’t right,” it keeps happening.
People already form emotional attachments to AI companions. Others use AI systems for therapy, reflection, life advice or emotional regulation. AI-generated images increasingly shape aesthetic expectations despite depicting events, people or environments that never actually existed. Synthetic personalities accumulate followers online. Artificial conversations become emotionally meaningful. In some cases, the simulation no longer imitates reality. It becomes psychologically operative reality itself.
This is why AI feels like such an important moment in the evolution of the hyperreal. The representation is no longer merely competing with reality. It is becoming increasingly capable of replacing parts of human experience entirely.
That is what makes the situation psychologically unsettling.
Yeah, this whole thing is just some dead French postmodernist’s philosophical concept, but doesn’t it sound pretty accurate?
Moreover, the problem is not that these “hyperreal” systems are necessarily “bad.” In many ways they may feel smoother, faster and emotionally easier than ordinary human interaction. In the same way social media reduced friction in communication, AI increasingly reduces friction in emotional and cognitive life itself.
But friction is not always meaningless. Human relationships are often slow, imperfect and emotionally complicated. Conversations fail. People misunderstand each other. Intimacy involves uncertainty and vulnerability. Real life is inefficient.
Technological systems increasingly promise a version of existence where many of these uncomfortable aspects can be optimized away. And perhaps that transformation is inevitable to some extent. Human consciousness has always changed alongside technology.
But as I mentioned earlier, I think something psychologically strange happens when more and more layers of human experience become mediated through systems designed around efficiency, optimization and artificial responsiveness.
We are not simply “losing our humanity,” whatever that would even mean. We are changing. Our perception, attention, emotional life and relationship to reality itself are gradually shifting alongside the technologies we build.
And honestly, I think many people already feel this process happening internally, even if they struggle to describe it precisely.
The pressure to optimize and mediate more and more aspects of life will probably continue intensifying whether we want it or not. And realistically, most people – including myself – will continue participating in that world because avoiding it entirely is becoming increasingly impossible.
So what boundaries should we set? I have no idea. But I believe we’ll end up creating some, whether they serve a purpose or not. For example, I haven’t used sleep- and biometric-tracking devices (the kind that can breed orthosomnia) in a long time. Just like someone going on an Instagram detox who hasn’t touched it for weeks or months. Or like someone who takes the big step of living in isolation without internet access. In my opinion, these and many other things will happen, and they’ll be more common and popular than ever before.
Maybe that strange feeling of exhaustion, disembodiment, and unreality that many people experience online is not simply anxiety or nostalgia, but an early psychological reaction to living in environments our nervous systems were never really designed for.
Perhaps we’ll adapt and change so quickly that we won’t need to build any boundaries. Perhaps a global catastrophe will occur, such as a pandemic or World War III, and everything will be irrelevant. And maybe boundaries are important. Who knows. Unless something extraordinary happens, change is inevitable; it is only a matter of time before it reaches us, and the only open question is whether it will reach us all. And I’m glad I’m not the only one who realizes this.
