Over the past decade, artificial intelligence has moved from science‑fiction ideas to applied technology. Today, algorithms accompany us everywhere: in smartphones, cars, online stores and even offline retail.
Behind all these developments stands a vast array of scientists, engineers, developers and managers who, step by step, push humanity toward new technological horizons.
In an exclusive interview for ForkLog AI, the head of the Data Science team at Tango.me, Daniil Shvets, explained what the profession of a data scientist actually entails, why algorithms are biased, how to combat deepfakes and why OpenAI sells its developments.
ForkLog AI: Good day, Daniil. Your first degree was in economics, and you even worked in that field. Why switch to IT?
Daniil Shvets: It was more of a coincidence. When I graduated in business management, I realised that being a manager is fine, but you need to be able to manage something concrete in a specific domain.
With a strong mathematical background, I decided to return to something more technical, and this was just the moment when data science was on the rise. Besides, I had already spent some time trying to teach myself and build my own projects.
I’ve always been interested in ways to combine business and technology. I still believe that data science is the best place to find both business‑oriented domains and technical tools.
ForkLog AI: What difficulties did you encounter during the switch?
Daniil Shvets: The difficulties arose because data science is not only mathematics and algorithms, but also computer science, software development and more. I ran into problems specifically with the technical part.
For example, writing an algorithm was not the problem. But actually running it required skills more related to data engineering than to data science, and that was challenging.
And when I began studying data science I realised I was seriously lacking skills in computer science. So, when people ask what you need to know before starting training, I advise covering at least these basics.
ForkLog AI: You grew up in Moscow but moved abroad long ago. How would you assess the IT industry in the CIS overall and AI in particular?
Daniil Shvets: Russia and the CIS have retained strong scientific and pedagogical foundations since the Soviet era. There are many professionals who understand mathematics, which is an integral part of AI and programming. In other countries this is often a problem.
That is, in terms of personnel, talent and potential, these are probably the strongest countries I know.
However, the problem in Russia and the CIS is that meaningful AI is developed under the umbrella of large companies, such as Sber or Yandex.
As I see it, in Russia it is hard to create a startup from scratch that is not tied to a big company or government structures. In Israel, London or America, a person with an education and an idea can find investors one way or another. But in Russia, the process of finding funding and developing a startup independently seems to be lacking.
Because of this, many talented people either move to Yandex and Sber, which are excellent companies with genuinely strong capabilities, or leave the country. Opportunities to create startups in Russia and perhaps in the CIS are fewer than in some other countries.
ForkLog AI: In that case, should the state help develop the industry?
Daniil Shvets: Of course. The future of IT should not be driven only by technologies that serve direct state interests. I would also include potential products that boost the economy, support exports and improve citizens’ lives.
In my view, the government should support the industry, and this is not limited to Russia, Ukraine or Belarus — it applies to absolutely any state and is independent of geography.
ForkLog AI: Turning to global AI trends, what prospects do you see and is the sector moving in the right direction?
Daniil Shvets: The sector is moving in a very good direction. Yes, there is a trend toward the monopolisation of research areas by large corporations. One could name half a dozen companies, such as Google, Facebook, Amazon and a few others, that have monopolised many areas of the IT industry, for example cloud technologies. If a potential competitor appears, they either buy it or squeeze it out. That is certainly a downside.
But on the other hand, this gives a large number of startups access to the same cloud storage and technologies from these companies, allowing them to develop without having to invest vast sums in their own infrastructure. So much for the distribution of power.
As for the direction itself, there is a clear trend toward automating various processes, including through AI. And that is wonderful. Of course, it raises ethical questions about where the line should be drawn. For example, facial recognition: should it be ubiquitous?
But overall, AI is improving and simplifying people’s lives, making them more efficient and comfortable.
ForkLog AI: And what is your view on facial recognition?
Daniil Shvets: Facial recognition is a very good and advanced technology that can simplify many processes, including locating criminals. But it must not be abused. If this technology is used for its intended purpose rather than for self-interest, I am absolutely for it. The issue is what is currently taking place in some regions of China. Social scoring and monitoring a person’s every action: in my view, that is already crossing a line.
ForkLog AI: Do you think the Chinese experience leads to digital tyranny or to a safe society?
Daniil Shvets: These things are closely linked. All technologies have both advantages and disadvantages.
On the one hand, controlling every action could reduce crime, raise detection rates and keep society safe. On the other hand, people will lose personal freedoms.
Everything depends on what sacrifices a person is willing to make for peace and a degree of stability. It is hard to imagine a completely safe society using technology solely to monitor crime while preserving freedom and privacy.
ForkLog AI: And what is the situation with facial recognition in Israel?
Daniil Shvets: I see far fewer cameras here than in many other places. Either in Israel they’ve learned to mask them in ways others cannot, or here things are genuinely simpler.
ForkLog AI: Speaking of Israel, there have recently been several reports that the Israel Defense Forces use AI for military purposes. One of the latest cases is the deployment of a swarm of autonomous drones during recent shelling. Is AI in combat a threat to humanity or a security question?
Daniil Shvets: It depends where one draws the line: using AI for defence is, in my view, a positive thing, for example drones intercepting missiles. Using AI for attack is a more slippery matter, and I would tend to curb it regardless of justification.
ForkLog AI: Large technology firms are keen to dissociate themselves from such projects. For instance, Google employees in 2018 forced the company to stop cooperating with the U.S. Department of Defense. The same year the company drafted internal ethical norms. Do you think this is the right step toward demilitarising AI?
Daniil Shvets: Large companies find themselves in a difficult position. On one hand, they cannot abstain, because only they have the necessary knowledge and capacity for defence projects. On the other hand, corporations, as non‑state actors, are exposed to enormous reputational costs.
And now any scandal will trigger a flood of problems, boycotts and other issues that affect the company, its staff and revenue.
It seems to me they do carry out this work. Yes, there is no evidence, and if there were, that would mean they are not doing their job well.
I suspect that large firms such as Google are, directly or through their affiliated structures, involved in strategic developments, including with the Ministry of Defence.
Naturally, they try not to publicise this, even inside the company, because in organisations like Google it is impossible to keep a major project secret.
ForkLog AI: Deepfakes are increasingly sophisticated. How likely is their widespread use for malicious purposes?
Daniil Shvets: In my view, it will happen. We see with many technologies that they start out in academic papers, then become available to companies with resources on the level of Google or Facebook. Then skilled people manage to create something similar, and soon anyone can use it via an app.
For example, several years ago there was a popular app that aged a person’s face. The technique was described in academic papers five to seven years ago, and gradually it became more accessible.
Deepfakes are following the same path. At present, publicly available services are not yet good enough to produce results indistinguishable from the real thing. But this process is inexorable; it cannot be reversed. There will always be companies eager to build startups or apps that produce higher‑quality deepfakes, and the technology will become more accessible, including to people with less benevolent aims.
Deepfakes, I think, pose a very serious danger. In a few years’ time the quality of publicly available deepfakes will be so high that they cannot be distinguished from the real thing, and I cannot imagine fraudsters not exploiting that.
ForkLog AI: How can this be combated, and who should lead the fight?
Daniil Shvets: There are algorithms that detect deepfakes as well. It’s like viruses and antivirus software: an eternal struggle. There are methods to distinguish real images from generated ones. Recently Adobe released a tool that lets you see whether an image was retouched in Photoshop. This gives a rough analogue: just as attackers use deepfakes, potential victims will be able to buy services that verify whether an image is authentic.
ForkLog AI: Deepfakes sit alongside ethical and moral questions. Recently there has been much research and news about algorithms biased against certain population groups. Do you think this problem is real or a temporary trend?
Daniil Shvets: Algorithms are biased because they learn from data. If we see that certain people are involved in accidents more often than others, an insurer’s algorithm will assign a higher probability of accidents to those people and raise policy costs. Is that bias? In my view, no. Because that is data.
Clearly, various factors influence what we predict. And if the decision‑making process in data science hinges on gender, nationality or skin colour — that would constitute discrimination. Yet this mirrors broader societal problems, and the topic is especially acute today.
ForkLog AI: Have you ever encountered cases in which an algorithm behaved with bias?
Daniil Shvets: Of course. It happens in almost any system where there are signals tied to gender, age, nationality and so forth. The essence of an algorithm is that every parameter influences the result in some way. If you exclude gender from the model entirely, say because it provides no useful separation, then of course the model will not use it; but in almost any algorithm that does use gender, skin colour or age, those factors can influence the outcome.
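As a minimal illustration of that mechanism, here is a sketch (not from the interview; the data, feature names and model choice are invented purely for demonstration): the same classifier is trained on synthetic insurance-style data with and without a sensitive attribute, and the predictions for two otherwise identical profiles are compared.

```python
# Sketch: how a sensitive feature can shift a model's predictions.
# All data is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic features: driving experience (years), annual mileage (thousands of km),
# and a binary "group" attribute that correlates with accidents in this toy data.
experience = rng.uniform(0, 30, n)
mileage = rng.uniform(1, 40, n)
group = rng.integers(0, 2, n)

# Accident probability depends on experience and mileage, plus a correlation with group.
logit = -1.5 - 0.08 * experience + 0.05 * mileage + 0.8 * group
accident = rng.random(n) < 1 / (1 + np.exp(-logit))

model_full = LogisticRegression().fit(np.column_stack([experience, mileage, group]), accident)
model_blind = LogisticRegression().fit(np.column_stack([experience, mileage]), accident)

# Two drivers identical in everything except the group attribute.
p_a = model_full.predict_proba([[5.0, 20.0, 0]])[0, 1]
p_b = model_full.predict_proba([[5.0, 20.0, 1]])[0, 1]
p_blind = model_blind.predict_proba([[5.0, 20.0]])[0, 1]
print(f"with group feature: {p_a:.3f} vs {p_b:.3f}; without it: {p_blind:.3f}")
```

In this toy setting the two otherwise identical drivers receive different predicted accident probabilities as soon as the group attribute is included, which is exactly the point above: the model simply reflects the correlations present in its training data.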
ForkLog AI: Are questions of ethics closely linked to the emergence of general AI? And does humanity need it?
Daniil Shvets: For now AI is far from the idea of artificial general intelligence. I do not see AGI appearing in the coming years in a form capable of surpassing or replacing many of humanity’s decision‑making institutions.
ForkLog AI: Recently the DeepMind laboratory stated that reinforcement learning is sufficient for creating general AI. Do you agree with this assertion?
Daniil Shvets: Reinforcement learning can indeed achieve phenomenal results, but in very restricted domains. In checkers, chess or other games with fixed rules and a finite set of constraints, where billions of simulations can be run and the system can learn from them, then yes, certainly. In chess and go, and in video games, these algorithms beat the strongest human players. But this is all in narrow, highly structured contexts with static rules.
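A minimal sketch of the kind of narrow, fixed-rule setting being described (the environment, rewards and hyperparameters below are invented for illustration and are not from the interview): tabular Q-learning on a tiny deterministic track, where the agent can cheaply replay tens of thousands of simulated episodes.

```python
# Sketch: tabular Q-learning on a toy 1-D track with fixed rules.
# Everything here (environment, rewards, hyperparameters) is illustrative.
import numpy as np

n_states = 10            # positions 0..9; reaching position 9 ends the episode
actions = [-1, +1]       # step left or step right
q = np.zeros((n_states, len(actions)))

alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(42)

for episode in range(20_000):        # cheap simulations, because the rules never change
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(q[s]))
        s_next = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else -0.01     # small step cost, goal reward
        # standard Q-learning update
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next

# The learned greedy policy should always pick action 1 (move right toward the goal).
print([int(np.argmax(q[s])) for s in range(n_states - 1)])
```

The sketch only works because the same small, fully specified world can be replayed as many times as needed, which is exactly what an open-ended real-world problem does not allow.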
In the broader sense of AGI, our world is a vast, multidimensional model that cannot be reduced to a simple mechanism with a fixed rule set. Reinforcement learning then encounters problems because, with too many variables and no concrete framework, it cannot realistically perform billions of self‑play simulations.
Thus, although I regard reinforcement learning as one of the most promising and important fields of AI, I fail to see how, in its current form, this tool could address the abstract, multidimensional problems we face in real life beyond a narrow domain.
ForkLog AI: Among public figures, whom would you point to as driving the industry forward, and whom as a threat to it?
Daniil Shvets: I think the industry is currently propelled not by specific individuals but by the vast community of people working in it. Fifteen to twenty years ago there were pioneers who laid the foundations; now many top models and publications originate less from individual names than from accumulated knowledge and ongoing experimentation. So someone will eventually come up with something phenomenal.
Today there aren’t many standout names, though of course there are people worth watching. It is far more likely that a relatively unknown person, who runs many experiments with both successes and failures, will produce something remarkable.
ForkLog AI: OpenAI once promised not to monetise its developments. But in September last year it signed a deal with Microsoft for exclusive use of GPT‑3, and recently the company unveiled Copilot for GitHub. Is the organisation gradually becoming commercial?
Daniil Shvets: Every company needs money. OpenAI is tied to other ventures, but it cannot live on subsidies forever. Even organisations that do public‑benefit work, open source and so on, if they are serious companies, employ serious staff. And when there are many serious staff and a lot of powerful hardware, someone has to pay for them.
Therefore the notion of building something solely for the public benefit, without any compensation, is untenable. The organisation can either obtain funds from other companies, which raises its own questions, or monetise some of its own developments to sustain the work it gives to the public free of charge.
ForkLog AI: Regarding Copilot, how promising are such tools? Would you use them in your work?
Daniil Shvets: I haven’t used this particular tool. I generally haven’t used coding assistants. I suspect most people who code won’t use it in the near term. Yet such tools can help beginners or those who need to solve a task but lack certain skills.
ForkLog AI: Won’t there come a time when, as these tools proliferate, beginners rely on them more and lose skills and competencies they might otherwise develop by working and learning from other sources?
Daniil Shvets: Undoubtedly. For instance, with widespread autocorrect, even skilled people may begin to make more mistakes in handwritten texts.
Similarly, I find I hardly do mental arithmetic anymore. For simple tasks we now use a calculator, because it is readily available. I am sure older generations, accustomed to life without calculators, could count in their heads much better.
Whenever an improvement arrives and simplifies something, people lose some fundamental skills, and the same may happen with programming.
ForkLog AI: Does your past professional and entrepreneurial experience help you in your current role?
Daniil Shvets: In terms of direct skills, not really. But I would say that experience across different fields — economics, entrepreneurship, medicine or others — helps you approach any problem, including AI and data science, more comprehensively and from multiple angles.
I believe that a diverse professional background enables me to take a holistic view of a problem, understand the business objective and its meaning before solving it with data science.
ForkLog AI: Was it hard to find your first job in data science?
Daniil Shvets: Finding the first job was very hard, but I was fortunate. When I had just finished training, I won a hackathon organised by a large bank. It turned out they needed someone to experiment with reforming a system related to data science and analytics. So I thought: why not?
I consider this luck. Overall, I recognise that finding the first job is very hard. Particularly given that now one can gain strong knowledge through Coursera, DataCamp and other online courses, which has intensified competition in the job market.
ForkLog AI: What would you advise beginners in data science?
Daniil Shvets: I’d offer a simple piece of advice that many people neglect — don’t confine yourself to the technical knowledge that’s in courses and textbooks. Look at how to solve business problems. Today there are libraries that let you implement all of this in just a few lines.
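As a rough illustration of what "just a few lines" looks like (a sketch using a standard open dataset and scikit‑learn; none of it comes from the interview): training and evaluating a reasonable baseline classifier today takes only a handful of statements, which is why framing the business problem is often the harder part of the job.

```python
# Sketch: a complete train-and-evaluate baseline in a few lines with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```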
In my view, a data scientist is no longer just a technically oriented role that builds algorithms. It is increasingly people who solve business problems using data science, algorithms, data, training and so on. And that is precisely what I look for in people I hire. Many of my colleagues who hire data scientists say this is what they truly lack.
To reiterate, a data scientist is not only a technical specialty. It is a person who can deliver a solution to a specific product or business problem using algorithmic methods.
ForkLog AI: You are also involved in creative work. How do you view various algorithms developed for tasks related to art?
Daniil Shvets: In the near term a human will not be replaced in the realm of creativity, if only because creativity is about creating something new, not merely copying or reworking past styles.
All genuine creative breakthroughs involve novelty, whereas all algorithms, whatever the medium, are trained on existing material. Is it possible to teach a computer to imitate a work in a given genre? Certainly, and in the future it will do this better. But can a computer today create and invent a new genre? I doubt it.
