Information bubbles are now one of the core elements of the digital environment: platforms personalize feeds, search results, and recommendations to keep users inside their comfortable version of reality for as long as possible. The result is that people living on the same street — or even in the same building — increasingly receive fundamentally different accounts of what is happening in the world. Today we examine why this problem will only get worse.
According to the Reuters Institute, in 2024, 59% of respondents across 47 countries said they were concerned about their inability to distinguish truth from falsehood in online news. At the same time, the World Economic Forum’s Global Risks Report 2025 placed misinformation and disinformation among the leading short-term global threats for the second consecutive year, linking them to eroding institutional trust and deepening social polarization.
The situation has grown considerably more complicated with the rise of generative AI. Where producing propaganda content once required large editorial teams, significant budgets, time, and distribution infrastructure, the barrier to entry has collapsed. In 2024, the OECD noted that disinformation undermines trust in institutions and public policy, and that new digital tools are making information attacks cheaper, faster, and more scalable.
This very shift — from old-style information bubbles to their new, AI-accelerated version — is what we explored with Azer Aliyev, founder of ClickClickPlay.com, an affordable hyperlocal marketing service. He previously worked at Google, where he led strategy development and execution for Account Security within YouTube's Detection & Mitigation organization. We talked about how platform logic is changing, why personalized reality is growing increasingly dense, and what happens to society when technology learns not just to curate but to construct a separate version of the world for each of us.

2Digital: People living in neighboring apartments increasingly inhabit entirely different parallel universes — ones built by social networks and messaging apps. What exactly is an information bubble today?
Azer: An information bubble is essentially a semi-closed information room that contains only what you like and what confirms your opinion of yourself and your view of the world. You end up in that room either on your own, because it interests you and you choose to stay, or you’re invited in and persuaded to remain.
This mechanism has been refined for a long time, at least since the first religions appeared.
You exist inside a certain information vacuum, shaped by the environment around you: this is the right way to act, this information is correct, these postulates are true. And the entire infrastructure is built to keep you inside that information ecosystem, with as little access as possible to any other reality.
Not much has changed since. What has changed is the technological approach and the granularity of perception.
Where we used to talk about this at the level of a village, a city, a neighborhood — some geographic location — it has now come down to the level of a single family, or even a single individual: this is your truth, your information room. Meaning the person in the next room might inhabit a completely different world.
2Digital: Is the phenomenon of information bubbles seriously discussed inside Google? Does it concern the specialists who work there?
Azer: I can’t speak for all of Google, because Google operates like a federated republic — each individual team digs into its own set of problems to solve. But do they see the issue? Yes, absolutely.
If an information bubble develops on its own, if people simply believe in something, if they hold clear religious or political convictions — that’s natural. Every society has its rules, its cultural influences. The problem arises when an external actor steps in to create that information vacuum or information room for you, engineering it so that ordinary people start supporting something deeply harmful. The classic example is propaganda. That’s when certain protective mechanisms kick in.

What exactly those mechanisms are and how they work, Google doesn’t disclose — because revealing that would undermine their effectiveness.
2Digital: Who drives the bubble more — the platform’s algorithms or the user themselves? Are we being locked in, or do we willingly carry in the sofa and the fridge?
Azer: We’re driven in by stock markets. Behind every major company — YouTube, Meta, and so on — are investors demanding endless share price growth or dividend payouts.
If you’re a company, you essentially have four ways to deliver that. First, attract new users. Second, keep them on the platform as long as possible. Third, increase the volume of ads they see, since advertising is the primary revenue source. And fourth, raise ad prices.
When a platform is small, growth comes easily from acquiring new users. But as it scales, the focus increasingly shifts to keeping people on the platform and getting them to consume as much content as possible.
This isn’t just YouTube or Facebook — Netflix is the same story. They’re hunting for your attention. That’s what advertisers pay for.
You can raise ad prices, but not indefinitely: the market isn’t elastic, budgets are finite. Ad volume is also growing — visibly, across every platform, more of it every year.
So how do you engage users even more deeply, so they interact more, consume more content, or create more of it? You build algorithms that serve people the videos they find interesting. That’s logical enough.
And what interests us most? What agrees with our existing opinion, or reflects it back at us, saying: you’re smart, you’ve got it right. Or, in the worst case: you’re smart — and they’re the bad guys.
The algorithm doesn’t label “the bad guys” on its own. It simply notices that this kind of content interests you, or people like you. Interesting? Then we’ll show you more. And more. And more.
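The rich-get-richer dynamic Azer describes — the system notices interest and serves more of the same — can be illustrated with a toy simulation. Everything here is hypothetical (topic names, click probabilities, the sampling rule); it is a sketch of the feedback loop, not any platform's actual ranking model:

```python
import random
from collections import Counter

TOPICS = ["politics", "sports", "science", "music", "cooking"]

def simulate_bubble(bias_topic="politics", steps=1000, seed=42):
    """Toy feedback loop: the recommender samples topics in
    proportion to past engagement, and the user clicks one topic
    more often. A mild preference compounds over many rounds."""
    rng = random.Random(seed)
    counts = Counter({t: 1 for t in TOPICS})  # start out uniform
    for _ in range(steps):
        # Recommend proportionally to observed engagement so far.
        topic = rng.choices(TOPICS, weights=[counts[t] for t in TOPICS])[0]
        # The user clicks the favored topic far more often.
        click_prob = 0.9 if topic == bias_topic else 0.1
        if rng.random() < click_prob:
            counts[topic] += 1
    return counts

if __name__ == "__main__":
    for topic, n in simulate_bubble().most_common():
        print(topic, n)
```

After enough iterations the favored topic crowds out the rest of the simulated feed, even though nothing in the loop ever labels content as good or bad; it only amplifies the engagement it observes.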
2Digital: And of course, no one is going to explain in detail exactly what algorithms shape this information bubble?
Azer: Naturally. It’s hundreds of parameters — a highly complex and constantly evolving system. Obviously, basic parameters are factored in: language, age, gender, and so on. But that’s just the tip of the iceberg.

The system tries to account for every possible pattern in your behavior: when you watch, what you watch, what interests you, what might interest you, and what's happening around you. The history of what you watched six months ago, five years ago, last month. How you've changed over time: your age, your preferences, your social status, and so on. Local trends, global trends, how long you spent watching any given video, and much more.
2Digital: How has the information bubble situation changed since the arrival of generative AI?
Azer: Let’s start with the fact that AI has slashed the cost of producing any kind of content by an order of magnitude. Where making a video used to take a week, ten days, a month — resources, budget, and talent — now two or three prompts and fifteen minutes can get you a first result.
This has already led to the generation of enormous volumes of content that, in bulk, holds little real value — because it’s low quality.
And it also makes propaganda dramatically cheaper to produce. Where a propaganda message used to be crafted as a single piece for a broad audience — because generating and distributing it was difficult — today the technology allows thousands of tailored messages to be created for thousands of distinct sub-cohorts of people. Which makes it possible to persuade each of them with ever-greater precision.
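The shift from one broad message to thousands of tailored variants can be sketched in a few lines. The cohort names, attributes, and wording below are entirely invented for illustration; the point is only how cheap per-segment tailoring becomes once the framing is templated or generated:

```python
# One core claim, re-framed for each audience segment.
# All cohort data here is hypothetical.
CORE_CLAIM = "Policy X will change your daily costs."

COHORTS = [
    {"name": "commuters", "hook": "fuel prices"},
    {"name": "parents",   "hook": "school budgets"},
    {"name": "retirees",  "hook": "pension value"},
]

def tailor(claim: str, cohort: dict) -> str:
    """Wrap the same claim in a frame specific to one cohort."""
    return f"{claim} Here is what that means for {cohort['hook']}."

# One message per cohort, keyed by cohort name.
messages = {c["name"]: tailor(CORE_CLAIM, c) for c in COHORTS}
```

In practice a generative model would produce the per-cohort framing instead of a fixed template, but the economics are the same: the marginal cost of each additional variant approaches zero.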
At its core, AI is simply a tool that accelerates many processes by an order of magnitude, and we can’t yet fully grasp where that leads. But it’s clear that if someone needs to shift the narrative or public opinion on a local or global issue, doing so has become easier and faster.
2Digital: So what’s actually happening in reality diverges further and further from what we see inside the information bubble.
Azer: Yes — and because of the sheer volume of content people consume, they see less and less of the real world. It gets replaced, essentially, by the Instagram picture, and we come to mistake that for reality. This applies to everyone, from teenagers to pensioners. We see an artificial image being served to us, and we increasingly believe that the world around us is binary: some people are “ours,” and they’re inherently good; everyone else is “other,” and they’re all bad. There are widely accepted — but in fact imposed — standards of beauty, success, and consumption.
Reality is far more colorful than that. And this gives an extraordinary amount of power over ordinary society to a vast number of tech and marketing companies. Though I think this “reality” will hold only until the first major crisis. In a crisis, consumption patterns shift sharply. Sooner or later the pendulum swings back. A moment will come when old narratives break down, old containment institutions fail, and new ones haven’t been built yet. That’s a period of chaos — and of freedom, including, to some degree, freedom from information bubbles.

2Digital: Should we actually welcome that kind of freedom? Most of us — even while complaining about artificially constructed information bubbles — want to live in a predictable, regulated, and safe environment.
Azer: More than that — we're already living in one, at least at the YouTube level. A simple example: when did you last see child pornography on YouTube? It isn't there — not because no one uploads it, but because YouTube removes it before a single person can see it. And uploads of that material are constant; YouTube continuously scrubs it and cooperates with law enforcement to stop it. But the uploads keep coming all the same.
Should governments try to make the internet safer? Absolutely yes — I fully support that. The problem is different: when the state starts intervening too heavily, then who is checking the state to ensure it’s acting in citizens’ interests and not someone else’s?
And if the state claims it can guarantee complete safety, why is there still street crime in every country? What gives anyone confidence that the state can make the internet safe without suppressing freedom of dissent? My fundamental problem is that we cannot make the entire internet completely safe. We should strive toward it. But whether we can do it without serious overreach — I doubt it.
2Digital: In that case, it seems we’ll have to choose our priorities. Either overreach in the first stage. Or that same life-giving chaos.
Azer: Let's not divide things into black and white again. There will be chaos regardless — if only in small doses. There are always people who don't like rules. But I'm not confident we can manage this adequately, without inflicting significant human cost along the way. So yes, I agree we should strive to make the internet safer for everyone. I just doubt that the people responsible for doing so will get it right.
2Digital: We’ve covered quite a few stories about channels being shut down for AI-generated content following complaints. But it’s clear the platforms do it reluctantly — because commercially, those channels generate revenue for both their owners and the platforms themselves. There’s a conflict of interest. In your view, what’s the fate of all this AI content mess over the next five to ten years?
Azer: It’s hard to fight, because content attracts people — especially when it’s well made, even if it’s AI. People watch the ads, companies make more money. But over the long term, people get tired of AI content and simply stop using the platforms. I had a similar experience with Facebook. I realized it had just stopped giving me what I actually wanted: real posts from my friends, people close to me, people I follow.
It’s the same with AI: I want to see content from people — ideally people I know, respect, whose opinions interest me, even when I disagree with them. I think YouTube is making some of the right moves right now. As far as I recall, they announced that pure AI-generated content will be demonetized. If you run an automated channel and everything is AI, you won’t get paid for it.
As a result, for the large number of people publishing AI content purely for money, it will eventually become economically unviable. Some will drop off, people will gravitate back toward real creators who invest serious time and resources. But AI content will remain — just as a supplement to the core. Like masks in Photoshop, or an overlay on top of something real.
2Digital: Could we reach a point where content producers simply mimic real human creators, while it’s still AI-generated content underneath?
Azer: Absolutely. And then the issue becomes how well YouTube and other platforms can detect that it’s AI content. Which is very hard. Given the pace of development, AI in the hands of those trying to game the system will advance faster than AI in the hands of those defending it. That’s the standard dynamic. So you end up having to build multi-layered defenses that minimize the damage. YouTube has a lot of those tools, and I know they’ll keep developing them. Hopefully other platforms will too.
2Digital: Or alternatively — AI content simply reaches a level where it satisfies us, and the problem dissolves on its own.
Azer: Exactly. Imagine, for instance, YouTube one day generating films using AI tailored to your personal preferences. Cartoons, science fiction, and much more will sooner or later be generated well by AI. You’ll be able to get a series made just for you, with your own characters, and decide what happens to them next. Or swap out the lead actor. Who would say no to that?

2Digital: YouTube’s fight against AI slop in the entertainment space has mixed results. Things get considerably more complicated when we’re talking about propaganda.
Azer: It’s worth remembering that influence over human minds was happening long before AI — so the causal link isn’t quite that direct. Artificial intelligence certainly assists in that dark work. But the problem runs much deeper. It’s not about AI slop; it’s about the fact that people are more willing to believe a convenient truth, that they let others take responsibility for them when hard decisions arise, that they depend on the authority of media personalities, and so on. That’s why information warfare will only intensify. Where once wars were fought through proxy forces, now they’ll be fought through proxy content. And unfortunately, there is no good solution to that problem.
2Digital: What about a bad one?
Azer: Ban everyone and build a sovereign internet. It exists in one form or another practically everywhere, because every country has its own laws. In Malaysia and Indonesia, for instance — if I’m not mistaken — you can’t run online casino ads or videos on digital platforms. That too is a form of sovereignty: we don’t need to show our citizens that. Or take Turkey, where Telegram channels with 18+ content are banned — you simply can’t access them from within the country.
Whether that’s good or bad — let everyone decide for themselves. But in some cases, I think we’d agree it’s a good thing: no online casinos, no certain categories of harmful content. In other cases, not so much. And that’s already deeply tied to geopolitics.
2Digital: Is there a risk that AI assistants will become super-editors of reality — where the user no longer chooses between sources, but receives a single, pre-assembled personal picture of the world?
Azer: I think it will happen even sooner than it seems. Ten or fifteen years ago, if you wanted to find something, you’d open any search engine and get twenty or thirty websites ranked by citation count and authority. You could at least compare what different sources were saying. Then, with little fanfare, censorship began and vast swaths of content started disappearing from those same results.
Most of the content on the internet, until recently, was written by people — even if from very different perspectives. But now the situation is changing fast. According to an Ahrefs study, as early as spring 2025, 74.2% of new web pages in their sample contained AI-generated content. And that trend will most likely only accelerate. Soon the question won’t be which content Perplexity selected for you — it will be that someone else pre-generated the entire body of text from which the AI then assembles your picture of the world.
We are losing alternative points of view.
2Digital: How does modern marketing amplify information bubbles? After all, advertising now sells not just a product, but a mindset, an identity, a set of values.
Azer: It's always been that way, honestly. AI just allows it to happen faster, more efficiently, and at greater scale. I don't think humanity will ultimately get dumber — it will more likely become increasingly indifferent to many things. And most of the world's problems happen precisely because people are indifferent to what's going on. It's a tragedy, but at the same time a natural survival mechanism.
2Digital: What can a person actually do to not necessarily escape the bubble entirely, but at least make it a little wider? What three to five habits would you name?
Azer: It will sound obvious — but try to get information from real people. Very soon, that will be a valuable and probably even rare habit.
You can simply watch television from another country, while it still exists in the form we’re used to — sometimes you’ll be impressed, sometimes horrified. But at least you’ll see what other people see, what they consume, and why they think the way they do. Accepting it may be very difficult, but you’ll understand them better.
And the most interesting thing — look at the crossfire points of information. When it comes to the current situation in Iran, it’s extremely useful to look not only at what Iranians, Americans, or Israelis are saying, but also at what people living in Dubai, Oman, Kuwait, or Qatar are saying — those who are literally between these countries. Because they see both sides.
Having multiple accounts for browsing is also a good idea. Even better — accessing other platforms through a VPN from a different country and in incognito mode, to see what an average ordinary person there actually sees.
The more complex option — keeping several devices that don’t cross-reference each other. Anything that helps fragment your identity for the systems that are constructing reality for you.

