
Seven weeks in X’s (formerly Twitter) algorithmic feed — and American users’ political views irreversibly shifted to the right: priorities shifted toward inflation, immigration, and crime; attitudes toward the cases against Donald Trump softened; and attitudes on the war in Ukraine moved closer to pro-Kremlin ones. Turning the algorithm off reversed almost none of it. These are the results of an independent study conducted in 2023. One of its authors, Paris School of Economics professor Ekaterina Zhuravskaya, who was recently placed by the Russian Ministry of Justice on its list of “foreign agents,” explained in an interview with T-invariant why the algorithm proved to be such a powerful tool for shaping opinions.
Echo Chambers and the Algorithm’s Ratchet
Social networks have ceased to be just technical tools for communication or sources of entertainment content — they have become a powerful channel for influencing public opinion, political preferences, and electoral behavior. A substantial and steadily growing share of the world’s population gets its political news from social networks. Whereas in the 20th century public consciousness was shaped primarily by newspapers, radio, and television, in the 21st century social networks have assumed that role. The information a given user sees — its amount and emotional framing — is determined by the platform’s feed personalization algorithms. Yet the selection mechanisms remain extremely opaque, and their long-term political consequences have not yet been studied thoroughly.
Researchers, regulators, and civil society are seriously concerned: today machines decide which specific news stories we read. The concern stems from well-founded theoretical hypotheses that algorithmic personalization of news feeds in social networks can produce echo chambers, filter bubbles, and algorithmic polarization, and can amplify deliberate disinformation. A central question is whether social networks — contrary to their original democratic potential — are turning into an instrument for systematically distorting the information environment in favor of specific ideologies, political forces, or even external actors. Bot and troll farms and coordinated campaigns to “boost” emotional content have been used for years to create a false impression of majority sentiment. Understanding how modern news-feed algorithms work and how vulnerable they are to such manipulation is becoming increasingly important for assessing threats to democratic institutions.
Seen in this context, the work by Ekaterina Zhuravskaya and her colleagues from Italy, Switzerland, and France, The political effects of X’s feed algorithm, published in Nature, is highly significant. It is a large-scale independent field experiment that for the first time robustly measured the causal impact of the platform X’s (formerly Twitter) feed-ranking algorithm on the political views of real users.
The experiment took place in the summer of 2023 — six months after Elon Musk acquired the platform and roughly a year before his public endorsement of Donald Trump in the 2024 presidential election. A key point: the study was conducted independently of the company X — without its involvement in the design, data collection, or analysis. This was possible because on this social network users could choose how their feed was curated: via the algorithm (For You tab) or chronologically (Following tab).
Several thousand U.S. users took part. Three-quarters originally used the algorithmic feed, while one-quarter used the chronological feed. The researchers randomly assigned each of these users a feed mode for seven weeks: either algorithmic or chronological. This made it possible to answer two questions: how enabling the algorithm affects users who had not previously used it, and how disabling it affects those who had. To answer the first question, it was enough to compare those who had always used the chronological feed with those who had originally been on the chronological feed but were switched to the algorithmic one during the experiment. To answer the second question, it was necessary to compare those who remained on the algorithmic feed with those who were switched to the chronological feed.
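To make the two comparisons concrete, here is a minimal sketch of how such effects could be estimated as simple differences in mean outcomes between randomized groups. The file name, column names, and outcome variable are hypothetical illustrations, not the study’s actual data or estimation code.

```python
import pandas as pd

# Each row is one participant. All column names and the file are hypothetical:
#   baseline_feed: feed the user had before the experiment ("chrono" or "algo")
#   assigned_feed: feed randomly assigned for the seven weeks ("chrono" or "algo")
#   outcome: a post-experiment attitude measure (e.g., an issue-priority index)
df = pd.read_csv("participants.csv")

def effect(data, baseline, treated, control):
    """Difference in mean outcomes between two randomly assigned feed modes,
    among participants who shared the same feed before the experiment."""
    sub = data[data["baseline_feed"] == baseline]
    treated_mean = sub.loc[sub["assigned_feed"] == treated, "outcome"].mean()
    control_mean = sub.loc[sub["assigned_feed"] == control, "outcome"].mean()
    return treated_mean - control_mean

# Question 1: effect of turning the algorithm ON, for prior chronological users
effect_on = effect(df, baseline="chrono", treated="algo", control="chrono")

# Question 2: effect of turning the algorithm OFF, for prior algorithmic users
effect_off = effect(df, baseline="algo", treated="chrono", control="algo")

print(f"Enabling the algorithm:  {effect_on:+.3f}")
print(f"Disabling the algorithm: {effect_off:+.3f}")
```

Because the assignment is random within each baseline group, these simple contrasts have a causal interpretation; the published analysis is, of course, richer than this sketch.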
Some participants installed a custom browser extension that recorded the real feed content. This later allowed the researchers to classify posts by ideological orientation.
The results proved to be unexpectedly asymmetric:
- Switching to the algorithmic feed significantly increased the share of conservative (right-wing) political content — even in the feeds of users who initially leaned Democratic.
- Turning the algorithm on influenced the political preferences of those who had not previously used it. The algorithm shifted political attitudes in a conservative direction on several specific issues: policy priorities (inflation, immigration, and crime became more salient to users, while education, healthcare, and climate became less so), evaluations of the criminal cases against Donald Trump (the number of users who considered them inappropriate increased), and attitudes toward the war in Ukraine (shifting toward pro-Kremlin positions).
- The effect was most pronounced among Republicans and independents; among strong Democrats there was no significant shift in views, despite increased exposure to right-wing content.
- Most importantly: this shift was irreversible — political views did not revert when the algorithm was disabled.
- To explain the asymmetry of the effect, the researchers analyzed user behavior — specifically, which accounts they followed. It turned out that users who had previously used the chronological feed and were switched to the algorithm during the experiment began following conservative political activists and continued to see their content even after returning to chronological order. The authors refer to this as the ratchet effect.
- At the same time, neither turning the algorithm on nor off significantly affected basic party identification or the level of affective polarization (emotional hostility between the camps).
- Content analysis showed that the X algorithm sharply reduced (by ~58%) the presence of legacy media in users’ feeds while simultaneously increasing the share of posts from political activists (predominantly right-wing) and emotionally charged content. This replacement of sources — from institutional journalism to opinion leaders and activists — is considered one of the central mechanisms behind the observed shift.
Notably, prior to this study, a 2020 experiment turned off the algorithmic feed on Meta platforms (Facebook and Instagram). That experiment found no significant effect on political views from disabling the algorithm. Many interpreted this result as evidence that the algorithm does not influence political views. However, Zhuravskaya and her colleagues’ study shows that it is difficult to determine the impact of an algorithm by examining only its deactivation — because that impact can be long-term. Interestingly, the null results from turning the algorithm off on X and on Meta are consistent with each other — despite differences in the information environment and the priorities built into the algorithms.
T-INVARIANT BACKGROUND
Ekaterina Zhuravskaya is a prominent economist specializing in political economy, media, disinformation, and the behavioral effects of digital platforms. She is a professor at the Paris School of Economics and École des Hautes Études en Sciences Sociales. For many years she has studied how information technologies influence political behavior and democratic processes. Zhuravskaya rarely gives interviews, but in this case she made an exception, considering the topic too important to remain solely within academic circles.
Advantages of Independence
T-invariant: What led you to study the X feed algorithm in 2023? Was it linked to Elon Musk’s acquisition of the platform or to the public debate about its changes? Or perhaps with an interest in why previous studies (for example, on Meta platforms) failed to detect political effects that seem obvious?
Ekaterina Zhuravskaya: We were interested in a question that had not been adequately studied: whether recommendation algorithms can not only change what a person sees in the feed but also influence their political views. Previous large-scale studies, in particular on Meta platforms, found no effects from turning off the algorithm. It was important for us to understand whether this means that algorithms broadly are politically neutral, or whether the effect is long-term and cannot be detected by examining only the deactivation of the algorithm. X was especially interesting because in 2023 it was possible to make a relatively clean comparison between the chronological and algorithmic feeds, and the platform itself was already playing a noticeable role in political discourse.
T-i: Your experiment is one of the few independent (conducted without platform cooperation) field studies of the impact of news-feed algorithms on political attitudes. To what extent does this independence limit researchers’ capabilities? How much can platforms hide from an outside observer?
E.Z.: Financially — quite substantially. We had to pay participants to follow the feed mode assigned to them in the experiment. In addition, an external researcher does not know the precise ranking parameters, does not see how the algorithm changes over time, and does not observe many of the signals on which personalization is based.
But independent research also has an important advantage: it does not depend on the good or bad will of the platform itself. One might assume that if a platform invites researchers to conduct an experiment, it is already anticipating possible outcomes and preparing to interpret them in its own favor. This raises questions about why the platform invites researchers at one particular time and not earlier. We, however, were able to see the real consequences of the algorithm’s operation “externally,” even though we did not have access to its internal workings.
T-i: Is it possible, from the outside, to assess the level of bias or ideological saturation in the user-generated content on a given platform or within its segments? Are there any known attempts to develop such metrics?
E.Z.: It is possible to measure the ideological composition of content, the share of posts from traditional media, activists, and party accounts, as well as analyze which types of messages the algorithm systematically amplifies. That is exactly what we did. But it is impossible to fully disentangle whether the algorithmic change in the feed results from the algorithm being designed to prioritize a particular political direction or whether the algorithm is politically neutral yet messages of a certain political orientation (in our case — conservative) cause users to spend more time on the platform.
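As an illustration of the kind of composition measurement described here, the sketch below computes the share of posts by source type and by ideological leaning, separately for the algorithmic and chronological feeds. The data format, labels, and file name are hypothetical; in the actual study, classification was performed on feed content captured by the participants’ browser extension.

```python
import pandas as pd

# Each row is one post shown to a participant. All labels and the file are hypothetical:
#   feed: "algo" or "chrono"
#   source_type: "legacy_media", "activist", "party_account", "other"
#   leaning: "left", "center", "right"
posts = pd.read_csv("feed_posts.csv")

# Share of each source type within each feed mode
source_shares = (
    posts.groupby("feed")["source_type"]
         .value_counts(normalize=True)
         .unstack(fill_value=0)
)

# Share of each ideological leaning within each feed mode
leaning_shares = (
    posts.groupby("feed")["leaning"]
         .value_counts(normalize=True)
         .unstack(fill_value=0)
)

print(source_shares.round(3))
print(leaning_shares.round(3))
```

Comparing these shares across feed modes shows what the algorithm amplifies; as Zhuravskaya notes, it cannot by itself reveal why the algorithm amplifies it.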
T-i: You conducted the experiment in the summer of 2023 — six months after Musk purchased the platform and roughly a year before his public endorsement of Trump. To what extent do you consider the results specific to that period and that version of the algorithm, and to what extent do they reflect more general patterns in how recommendation systems operate?
“You Can Only Convince the Doubtful”
T-i: The most notable result of the study is the asymmetry: turning on the algorithm shifts views in a conservative direction (especially among Republicans and independents), while turning it off does little to bring them back. Is the “ratchet effect” mainly driven by changes in the list of subscriptions, or does it affect users’ beliefs more deeply?
E.Z.: Our data clearly show the mechanism working through changes in the users’ own behavior — above all, through changes in who they follow. Under the influence of the algorithmic feed, participants were more likely to follow conservative political activists, and after returning to the chronological feed these follows remained.
Therefore, turning off the algorithm did not necessarily return people to their starting point: their information environment had already been altered. It is in this sense that we speak of the “ratchet effect” — not an immediate impact, but a more lasting trace that the algorithm leaves through changes in the patterns of attention and subscriptions.
T-i: The algorithm significantly reduced the share of legacy media in the feed and increased the share of activists (predominantly right-wing). Do you see this replacement of sources — from institutional journalism to emotionally charged influencers — as the main mechanism behind the observed shift? Many users with right-wing views argue that major media have a left-wing bias. If we take the two effects you noted (the rightward shift of the feed and the reduced presence of major media) — which seems more like the cause and which more like the consequence?
E.Z.: I would not treat these two phenomena as separate. The decline in the role of major media and the rightward shift occur simultaneously in reality. Determining which is more important would require a different experiment involving not the algorithms used by social networks but a controlled experimental algorithm created by the researchers. If the algorithm downranks institutional journalism and promotes more attention-grabbing activist content, it simultaneously changes the ideological composition of the feed.
T-i: Democrats in your experiment barely changed their views, despite increased exposure to conservative content. Does this suggest that recommendation algorithms on social media platforms are more effective at reinforcing users’ existing views than at changing their minds?
E.Z.: In social psychology and media economics it is well established that responsiveness to a message depends on the extent to which it resonates with prior beliefs. It is very difficult to convince someone of something they are convinced is false. You can mostly persuade those who are in doubt.
T-i: Can we expect that the conservative tilt among some participants will weaken after several months or a year, or does the “ratchet” nature of the changes make them practically permanent?
E.Z.: Our experiment was not designed to measure very long-term effects. But within the observed time horizon we see clear persistence: after the algorithm was turned off, the changes did not reverse on their own because the users’ own behavior had already changed — primarily their subscriptions. We cannot claim how long the effect lasts, but we can confidently say that it is not limited to short-term mechanical impact.
The Nature of the Bias
T-i: In your paper you link the observed rightward bias of the algorithm to the “preferences of the platform’s owner” and the characteristics of the information environment he created. What specific data from the experiment most convincingly argue against the claim that this is simply a natural consequence of the popularity of conservative activist content among X users?
E.Z.: As I have already said, we cannot separate these two effects in our experiment. However, there are indirect indications that right-wing content really is more engaging and keeps users on the platform longer. For example, posts from left-wing activists are also prioritized by the algorithm, yet this does not lead users to start following them.
T-i: You note that as early as 2016, long before Elon Musk, the Twitter algorithm already favored right-wing content. Yet in the chronological feeds of participants in your experiment, liberal content predominated from the outset. How do you explain this? Doesn’t it suggest that your sample was skewed to the left compared to the average X audience, and that the algorithm was therefore imposing more right-wing content on it in an effort to bring it into balance with its own environment?
E.Z.: The algorithm favors right-wing content relative to the chronological feed, but this does not mean that most of the content is right-wing. It means that there is more of it in the algorithmic feed. There is no contradiction here, and the sample is not skewed — as we demonstrate by comparing it with the American National Election Studies.
T-i: If we assume that the X audience has historically and persistently leaned right (and left-leaning users simply feel unwelcome there and leave more often), then switching to the algorithmic feed will naturally amplify right-wing content — because it receives more engagement. In that case, can we speak of deliberate intervention by the owners?
E.Z.: X’s audience is now more right-leaning than it was before Elon Musk’s acquisition, yet there are still many Democrats among its users. In any case, the algorithm personalizes the feed for each user. And we examine the effects for both Democrats and Republicans. We do find results for Republicans and independents and do not find such effects for Democrats, but this is not the result of sample composition bias; it is the result of the algorithm’s behavior. Our study does not allow us to clearly distinguish deliberate intervention from the maximization of engagement. However, this does not mean we cannot draw the conclusion that the algorithm has political influence.
T-i: Where do you draw the line between the “preferences of the platform’s owner” and the “natural reflection of the preferences of the majority of active users”? Should social network owners actively maintain ideological balance, even if it goes against the mechanics of engagement?
E.Z.: Drawing this line is quite difficult, because in reality several factors interact: the structure of the user network, the logic of engagement, and the design of the algorithm itself. But for me a key distinction arises where the algorithm stops being just a mirror of user activity and starts systematically amplifying certain types of content over others. In our study we see that the X algorithm reduces the share of posts from traditional media and increases the share of posts from activists, especially conservative ones. This is no longer simply a reflection of demand, but the result of the platform’s ranking architecture. At the same time, I do not think the answer necessarily lies in manually enforcing ideological balance. Far more important are the transparency of ranking criteria, the possibility of external audit, and genuine user choice of feed mode. The algorithm downranks all traditional media — both liberal and conservative. This leaves users in an information space where everything is reduced to emotions rather than facts. Here I do not mean ideological balance, but users’ ability to receive verified information.
T-i: It seems very important to distinguish the influence of owners (shareholders) from the influence of the social network’s user base itself (stakeholders). Does your study allow us to separate the two? Is it possible to design a study that would leave no doubt on this question?
E.Z.: Our study does not allow us to reliably distinguish deliberate intervention by the owner from algorithmic amplification of content that better retains attention in an already established environment.
This is a key limitation. But it does not undermine the main conclusion: the very method of ranking on the platform has political consequences. We show not merely that there is a lot of right-wing content on X, but that switching to the algorithmic feed causally increases exposure to conservative content and shifts a number of political attitudes in a conservative direction. The question of intent requires different kinds of data — for example, internal logs of algorithm updates or access to the platform’s management decisions.
T-i: In discussions people often talk about bots and trolls. Coordinated online campaigns use emotional content and engagement algorithms to distort people’s picture of the world (we recently reported on such a campaign in Burkina Faso that led to the disruption of an anti-malaria project). Could the rise in right-wing (and pro-Kremlin) sentiments be not so much a response to real social problems as the result of sustained, deliberate manipulation of public opinion through social networks?
E.Z.: Of course propaganda has influence. However, as we have already discussed, it is especially effective when it exploits real social problems for its own purposes.
AI: Risks or a Tool for Control?
T-i: If one ideology begins to clearly dominate, it gains an edge both in attracting new users and in getting boosted by the recommendation algorithm, thereby suppressing its opponents even further (up to and including canceling). Does a natural self-correcting mechanism for such a process exist, or, without external intervention (by regulators, changes to the algorithm, or user migration), can it go as far as almost completely displacing alternative views?
E.Z.: On social media, everyone can find an echo chamber where no one will cancel them. Unlike in traditional information spaces, in social networks such a displacement mechanism is far weaker — simply because users can always find like-minded people. It is no coincidence that, as we show in the study, Democrats’ feeds contain far more left-wing content than Republicans’ feeds. The fact that the algorithm shifts the feed to the right does not mean it turns the feed right-wing; it simply makes it slightly less left-wing.
T-i: How justified are comparisons between manipulations through social networks and the centralized propaganda used by the Bolsheviks, the Nazis, or the Putin regime?
E.Z.: Such comparisons are justified, but only with major caveats. Traditional propaganda was centralized and uniform, while the platform environment is personalized and superficially decentralized. In this sense modern platforms are even more powerful in some respects: they do not simply repeat the same message but tailor it to each user and amplify it via feedback loops. But this is not the same phenomenon, and historical analogies are useful only when they do not obscure the differences.
T-i: What threats to democratic institutions arising from the area you study seem most serious to you?
E.Z.: I would highlight three threats.
First, the systematic distortion of people’s worldview — not necessarily through outright lies, but through the selection of topics, sources, and priorities. Second, the entrenchment of this distortion through changes in habits and attention networks, which is precisely what we see in the subscription mechanism. And third, the ability to influence particular political views without changing a person’s party identity. Our results are especially vivid here: party identification does not change, but views on important issues do.
T-i: How do you assess the role of AI in future versions of algorithms? Will it make manipulations of public opinion even subtler and less noticeable, or, on the contrary, will it allow rapid detection of such manipulations and provide warnings about attempts to “hack” your worldview and beliefs?
E.Z.: AI will almost certainly make such systems more adaptive, more precise, and therefore potentially more influential. This applies both to quite useful personalization and to politically sensitive content. But the same technologies can also be used for oversight — for example, to detect coordinated influence campaigns, label synthetic content, or identify anomalies in recommendations. Therefore AI is not only a source of new risks but also a possible tool for control.
T-i: Until 1987, the United States had a “Fairness Doctrine” for radio and television: because of the limited number of frequencies, broadcasters were required to present all major points of view in a balanced manner. Although today anyone can create a website or account, real access to a mass audience is concentrated among a few platforms — a de facto monopoly similar to the old broadcasters. Is there reason to think that without some regulatory analogue of the Fairness Doctrine (an obligation to balance opinions or ensure algorithmic transparency) major platforms risk sliding toward Radio Télévision Libre des Mille Collines — where a dominant agenda suppresses all other voices?
E.Z.: I would be very cautious about directly applying this logic to modern platforms. Forcing an algorithm to “symmetrically” promote all positions is both technically difficult and normatively risky. But this does not mean that regulation is unnecessary. In my view, more realistic and important are requirements for transparency: researcher access to data, independent audits, a clear choice of feed mode for users, and disclosure of the basic ranking principles. Platform responsibility should start not with imposing ideological parity, but with accountability for the attention architecture.