04/10/2026 | News release | Distributed by Public on 04/09/2026 21:29
According to research, people spend most of their online time on social media, where algorithms select a personalized feed of content for each user. Together with Alexey Savelyev, associate professor at the Department of Information Technology at Tomsk Polytechnic University, we look into how the algorithmic feed works and whether it is possible to "train" social media to show only interesting content.
Social media background
Content selection is handled by recommendation systems: smart feeds and their selection methods. Such algorithms work by collecting and analyzing data on user activity. Their main task is to hold the user's attention and to maintain or increase the user's loyalty to the platform.
"Recommendation systems have developed for several reasons. First, there is no longer a mass influx of new users, as there was before. The annual increase is about 3% of the total global population, which is not much. Second, the amount of content is growing at an unprecedented pace. Existing users cannot consume more content than they already do. This has created a situation of fierce competition for consumer attention. Today, every Internet resource competes with every other. This is also why we see a large number of mergers and acquisitions among content producers," the expert notes.
Such a struggle for users, according to Alexey Savelyev, has affected both the content itself and the way it is presented. Modern consumption patterns encourage the creation of short, vivid, and emotionally charged content that can easily go viral. The result is a constant flood of information in which emotionally provocative content gains an advantage.
You don't choose the content, but it chooses you
All social media content selection systems operate on several similar principles.
Every action we take online is recorded as an event. A typical user session consists of hundreds or thousands of such events: we click reaction buttons, slow our scrolling at a friend's photo, watch videos, and much more. All this data is aggregated in distributed systems. Such telemetry helps track user preferences,
- notes the lecturer.
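The telemetry described above can be sketched as a simple event log that is aggregated per session. The event names and fields below are illustrative assumptions, not any real platform's schema:

```python
# A minimal sketch of event telemetry: each user action becomes a
# record, and records are later aggregated. Event names are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    event_type: str   # e.g. "like", "scroll_pause", "video_view"
    item_id: str

# One hypothetical session for user "u1"
session = [
    Event("u1", "scroll_pause", "photo_42"),
    Event("u1", "like", "photo_42"),
    Event("u1", "video_view", "clip_7"),
    Event("u1", "video_view", "clip_9"),
]

# Aggregate: how often each action type occurred in the session
counts = Counter(e.event_type for e in session)
print(counts["video_view"])  # 2
```

In a real system such records would stream into distributed storage and be aggregated across millions of sessions; the principle of counting and summarizing per-user actions is the same.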
Another technique is so-called feature engineering: the process of converting raw data into features, that is, numerical characteristics the model can understand. The source data can include, for example, the CTR (Click-Through Rate), a vector representation of the user's interests, the time and duration of the last session, the device type, the Internet connection speed at that moment, and much more.
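A toy version of this step might look as follows. The log fields and feature names here are invented for illustration; real pipelines compute thousands of such features:

```python
# Sketch of feature engineering: raw interaction logs become numeric
# features a model can consume. Field names are illustrative assumptions.

def build_features(log):
    impressions = log["impressions"]
    clicks = log["clicks"]
    return {
        # Click-Through Rate: clicks divided by impressions
        "ctr": clicks / impressions if impressions else 0.0,
        "session_minutes": log["session_seconds"] / 60,
        "is_mobile": 1 if log["device"] == "mobile" else 0,
    }

features = build_features(
    {"impressions": 200, "clicks": 14, "session_seconds": 540, "device": "mobile"}
)
print(features)  # {'ctr': 0.07, 'session_minutes': 9.0, 'is_mobile': 1}
```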
"The source data in any recommendation system is transformed into so-called one-hot vectors, where only one element is 1 and the rest are 0. For example, we have a categorical value: the city of residence of a particular user (Tomsk). Other users live in Moscow, Novosibirsk, and Yekaterinburg. You can't use this data as it is; it's just a string of characters to the model. You can't simply assign numbers to cities either, because the model would misinterpret them as ordered quantities. Therefore, the data is converted into a vector, for example "0,0,0,1" for Tomsk, which allows us to detect useful dependencies later. There are exceptions: a number of models (such as CatBoost) can handle categorical values as they are," notes Alexey Savelyev.
The array of one-hot vectors describing one of the user's features is then transformed into a dense vector of fixed dimension, an embedding. Many such vectors are combined, which allows the model to automatically detect significant features.
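The transformation from one-hot to dense vector is, mathematically, a table lookup: multiplying a one-hot vector by an embedding matrix selects the row where the 1 sits. The table values below are random placeholders, not learned weights:

```python
# Sketch of an embedding lookup. A trained model would learn the table;
# here it is filled with random numbers for illustration.
import random

random.seed(0)
vocab_size, dim = 4, 3
table = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(vocab_size)]

def embed(one_hot_vec, table):
    # One-hot row times the table == the row where the 1 is located
    d = len(table[0])
    return [sum(o * row[j] for o, row in zip(one_hot_vec, table)) for j in range(d)]

dense = embed([0, 0, 0, 1], table)
assert dense == table[3]  # the lookup just picked out row 3
```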
Before reaching the feed, all content goes through two ranking stages. First comes the selection of candidate content for recommendation, which can be performed using collaborative filtering. The task of this stage is to select a variety of topics that are most likely to interest the user. The next stage is the final ranking: here, each piece of content is assigned a precise estimate of the probability of interest. The result of this work is a content ranking that reflects how suitable each item is for a particular user.
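The two stages can be sketched as follows. All vectors, item names, and the scoring formula are toy assumptions; real systems use learned models at both stages:

```python
# Sketch of two-stage ranking: cheap candidate generation, then a
# finer-grained final scoring. Everything here is an illustrative toy.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

user = [0.9, 0.1, 0.0]  # hypothetical user-interest embedding

items = {
    "cooking_video": [0.8, 0.2, 0.1],
    "cat_meme":      [0.1, 0.9, 0.0],
    "news_clip":     [0.7, 0.0, 0.3],
    "travel_vlog":   [0.2, 0.1, 0.9],
}

# Stage 1: candidate generation — a cheap similarity score keeps the top-k
k = 3
candidates = sorted(items, key=lambda i: dot(user, items[i]), reverse=True)[:k]

# Stage 2: final ranking — a heavier model scores each candidate; here a
# toy formula mixes similarity with a per-item "freshness" feature
freshness = {"cooking_video": 0.5, "cat_meme": 0.9,
             "news_clip": 0.8, "travel_vlog": 0.4}
ranked = sorted(candidates,
                key=lambda i: 0.7 * dot(user, items[i]) + 0.3 * freshness[i],
                reverse=True)
print(ranked)
```

Note how the second stage can reorder the candidates: an item that scored slightly lower on pure similarity can win the final ranking because of other features, which is exactly why the final stage exists.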
"Content from your area of interest, or content interesting to your close social circle, will usually be offered first. However, it is important to understand that the main function of any selection system is to maximize attention.
The algorithm doesn't optimize the feed so that you like it; it optimizes it so that you can't tear yourself away. If negative and provocative content holds your attention better, it will be offered regardless of its impact on your emotional state.
This is the economic model of the modern content market," the expert notes.
Is it possible to cheat the algorithms?
In short, no. According to the expert, mindlessly clicking on everything and leaving random reactions will not help much: the algorithms will recognize it as abnormal behavior. Modern analysis models are resistant to such noise thanks to filtering systems and the large weights assigned to implicit signals.
The only way to influence the algorithms is to send them explicit behavioral signals (recommend, share, and leave positive reactions to content of interest) and to point out inappropriate content directly (block specific tags, leave negative reactions). In this way, recommendations can be focused on a narrow topic of interest, if the algorithm decides that this will keep the user engaged.
The downside of such digital behavior, according to the TPU lecturer, is the risk of falling into a "filter bubble" and a certain information isolation.
"You can really influence the algorithmic feed only by changing your content consumption model and habits. Make no mistake: if you dislike or react negatively to a video you've watched to the end, this is more likely to provoke the appearance of such content in your feed than to exclude it. Yes, there are initiatives today for the ethical formation of recommendations, where priority is given to the user's interests rather than the platform holder's. However, you should not rely on them entirely," Alexey Savelyev notes.
The expert advises forming healthy consumption habits: strive to consciously control your attention, force yourself to skip unwanted content even if it pulls you in, and regularly review your subscriptions to signal a loss of interest in a particular type of content.