2021 proved no less challenging for the world than the year before. The pandemic raged on, whole industries remained in remote work or lockdown, and the tech sector wrestled with a semiconductor shortage.
But crises also bring opportunities. In this piece, the ForkLog AI editorial team outlines how the AI and machine-learning landscape advanced in the year just past, and what lies ahead.
The U.S.–China tech race
The United States and China have long vied for technological supremacy, but neither has yet earned standout, lasting leadership.
In 2019, under President Donald Trump, the U.S. Commerce Department placed Huawei and other Chinese firms under sanctions, elevating the conflict to a new level. Washington also pressed its Western partners to abandon the telecoms giant's technology.
In March 2021 the Chinese government presented a five-year plan for the development of advanced technologies, including artificial intelligence. The authorities said they would raise R&D spending by more than 7% a year through to 2025.
Under Joe Biden, policy toward China has not changed. The new occupant of the White House has continued to press Chinese companies with sanctions and to forge new alliances with an overtly anti-Beijing tilt.
On 12 July, the U.S. Department of Commerce imposed sanctions on 14 Chinese technology companies over human-rights abuses against the Uyghur minority in Xinjiang, barring American firms from doing business with them without a special licence. The trigger for the new restrictions was a BBC report in which a software engineer described the testing of emotion-recognition systems on Uyghurs in police stations.
Four months later the department introduced two more sanctions packages. The first targeted about ten Chinese makers of quantum-computing and AI technologies; the second blacklisted technology firms, startups and electronics manufacturers, including the drone maker DJI.
On 10 December the U.S. Treasury imposed new sanctions on the Chinese AI startup SenseTime, blocking the sale of its shares to American investors. The company had initially planned to raise about $767m, but due to the restrictions it postponed its market debut and returned the funds to investors.
Other indirect means of contest included government initiatives and strategies aimed at developing the technology sector. On the Chinese side was the aforementioned five-year plan for the economy, as well as the phased implementation of documents adopted since 2015.
The United States responded with several measures of its own over the year. The National Security Commission on AI published a 756-page report with recommendations for President Biden, Congress, business leaders and government agencies on maintaining U.S. technological dominance. It urged the federal government to raise sector funding to $40bn over the next five years, to build a coalition with allies, and to continue developing autonomous weapons.
In the United States a High-Level Committee was established to advise the president and other agencies on AI matters.
The Pentagon also called for a greater role for AI in defence. Lieutenant General Michael Groen, head of the U.S. Department of Defense's Joint AI Center, said the armed forces should expand their use of artificial intelligence so as not to fall behind competitors, above all China.
Yet some officials argue the race is already lost. Former Pentagon software chief Nicolas Chaillan made a sensational claim, which the Department of Defense rushed to refute: China, he said, has already won the AI race and is moving toward global dominance on the back of its technological advances, and in 15–20 years the United States will have no competitive advantages left.
But despite the public standoff, the United States and China remain heavily interdependent. China cannot do without Western technology, while the United States and its partners cannot ignore the vast market for their goods and services. That tie is a virtually insurmountable obstacle to ambitions of economic independence.
Autonomous weapons
We are witnessing a new arms race, this time powered by artificial intelligence. Autonomous weapons are already being developed by many countries, including Russia, the United States, China, Turkey and Israel.
In the summer, several media outlets, citing a UN report, described the first known case of a person in a military conflict being killed by a drone acting on its own: the autonomous system received no order to eliminate the target but decided to attack independently.
Throughout the year there were repeated reports of Israel using modern technologies in warfare. The Israeli military became the first in history to use AI and supercomputers in real combat. It later emerged that Israeli forces also deployed a swarm of AI-powered drones in the same operation.
In this context a stark example is the assassination of Iranian nuclear scientist Mohsen Fakhrizadeh using autonomous weapons.
Additionally, AI is increasingly used in intelligence. There is indirect evidence of its use by the U.S. and the United Kingdom.
The UN is seeking to curb intelligent weapons. From 13 to 17 December the states party to the Convention on Certain Conventional Weapons met in Geneva to discuss options for regulating smart weapons.
Unusually, China was among the first to call for restricting “killer robots” even though Beijing is already developing such systems. Nevertheless, the states could not reach an agreement.
The metaverses
Although the concept of the metaverse is far from new, talk of it intensified toward the end of 2021. The pandemic and its lockdowns, which accelerated the digitalisation of many aspects of daily life, played a significant role.
Facebook also contributed. On 28 October the company rebranded and changed its name to Meta, signalling its commitment to building a metaverse it regards as the “next stage in the evolution of social connections.”
In July, Mark Zuckerberg publicly announced the pivot. He said the company would focus on building a metaverse that runs on VR and AR headsets, mobile devices and gaming consoles, and would no longer be associated solely with the social network. The transformation would take around five years.
Alongside the rebrand, Meta demonstrated realistic avatars, environments for them to inhabit, and a neural interface for controlling them. Digital copies of people and spaces are set to become the foundation of the company's future metaverse.
Meta is not the only company aiming to build its own virtual worlds. Nvidia, Microsoft, Epic Games and many others have already announced plans.
AI plays a major role in building the metaverse. In the demonstrations presented, Meta and Nvidia employ technologies such as computer vision, speech recognition and synthesis, natural-language understanding, and recommender systems.
Yet full-fledged virtual worlds in which people and objects can interact remain distant. A major constraint on development is the lack of sufficient computing resources. Intel has already said that creating fully fledged metaverses will require processors a thousand times more powerful than those available today.
With the semiconductor shortage now two years old and, it seems, no end in sight in 2022, we will not be "escaping" into virtual worlds any time soon.
Language-model gigantism
In 2020 OpenAI caused a stir with GPT-3, the latest version of its language model. Competitors were quick to respond: as early as January 2021 Google unveiled a language model with over a trillion parameters, and in May announced its deployment in its products. The search giant has long applied language models in its services, for example BERT in Search.
In August Microsoft unveiled the MEB neural network for its Bing search engine. According to the developers, it handles “100% of queries in all languages” and is the largest model built at the company.
Two months later Microsoft, in collaboration with Nvidia, unveiled the Megatron-Turing Natural Language Generation model, with 530 billion parameters. It is designed to generate natural language and is roughly three times larger than GPT-3.
In parallel, Microsoft has been closely collaborating with OpenAI. As early as 2020 the two companies forged a partnership that gave Redmond exclusive rights to use GPT-3. In the summer of 2021 the companies introduced Copilot for GitHub, which allows developers to automate some of the coding process. Four months after launch the firms reported that the tool helped write around 30% of new code posted on the platform.
Russian developers are not lagging behind their Western peers. Yandex built its own 13-billion-parameter language model, YaLM. On top of it, developers created a demonstration service, "Balaboba", in which users can ask the neural network to "think up" a sentence or entire passages from only a short prompt.
Other developers around the world have likewise built their own language models. Chinese programmers claimed to have created the world's largest, WuDao 2.0, with 1.75 trillion parameters, ten times more than GPT-3.
Despite the growing popularity of transformers and language models, they carry a number of risks. Because the data drawn from the internet can reflect bias, such neural networks may speak in a language of hate and stereotypes. OpenAI acknowledged the problem and pledged to reduce GPT-3’s toxicity.
Moreover, the models often work only with widely used languages such as English, Chinese or Russian, ignoring less common ones.
Autonomous vehicles
The autonomous-vehicles field has seen many positive developments that have influenced the sector’s evolution. Major carmakers are increasingly applying and testing self-driving systems and outlining plans to bring innovative transport to market.
Tesla
The year proved especially productive for Tesla. Within a few months, beta testers of Autopilot and Full Self-Driving (FSD) received two major updates to the driver-assistance systems: the electric cars learned to navigate better in the dark and to recognise vehicles with flashing beacons, and the overall stability of autonomous driving improved.
In addition, the company ditched radar in favour of ordinary video cameras (it has long shunned lidar). This somewhat limits Autopilot's capabilities, since camera sensors can struggle with objects in the dark and the algorithms may misidentify them.
There were also scandals: in April a Tesla vehicle, reportedly in Autopilot mode, crashed into a tree and caught fire. The accident cost two lives.
Elon Musk, citing data from the car’s systems, said that someone was behind the wheel and that FSD was not active. Nevertheless, regulators opened an investigation and later corroborated the CEO’s statements.
Then the California Department of Motor Vehicles pressed the issue: it opened a case over possible misrepresentation in the company's advertising. The agency contends that Tesla promises buyers a driverless car, whereas the hands-off system in reality corresponds only to Level 2 autonomy under the SAE standard. The company itself also confirmed to the regulator that it would not reach Level 5 autonomy by the end of 2021.
The U.S. National Highway Traffic Safety Administration opened an investigation into Tesla after a string of crashes in which at least 17 people were injured and one died.
Waymo
Other firms' plans are no less ambitious, though they proceed with greater caution, giving users only limited control over the autopilot. Waymo, Alphabet's self-driving subsidiary, reported that its cars have logged 32 million autonomous miles on public roads, currently the outright record among autonomous-vehicle makers.
In August the company began testing robotaxis in San Francisco, allowing members of the public to ride in the city without a driver. A month later Waymo gained permission for commercial operation of autonomous cars in California.
Yandex
The Russian company Yandex has been actively testing autonomous cars of its own design. In the summer the firm began using autonomous couriers in the suburbs of Saint Petersburg and Moscow, and announced their forthcoming rollout at 250 U.S. campuses. In autumn it began testing a driverless taxi in Moscow.
The company’s efforts have not gone unnoticed: its autonomous-mobility unit has been valued at $7bn and named one of the world’s leading players.
Other players
In China, too, the deployment of autonomous electric cars is advancing. Baidu, the internet giant, has already launched a commercial robotaxi service in Shanghai and Beijing, and the company's driverless vehicles will also ferry participants at the 2022 Olympics.
Giant manufacturers such as Toyota, Volkswagen, BMW, General Motors and Ford are actively testing their own autonomous vehicles as well.
The autonomous-vehicles market is at a turning point. Its development is aided, on one hand, by algorithm developers and, on the other, by governments seeking to roll out autonomous systems on city streets: in 2021 governments in Russia, the United Kingdom, Germany and some U.S. states floated such initiatives.
However, today autonomous-driving systems remain imperfect and prone to error. Manufacturers must do a great deal of work to refine algorithms and to guarantee road safety.
Tighter regulation of facial-recognition technology
Europe and the United States
Discussions about regulating AI have persisted throughout the year. Perhaps the most discussed development was the European Commission’s proposed rules designed to restrict the use and development of certain types of software.
Lawmakers proposed banning applications that enable governments to perform social scoring or that exploit the vulnerabilities of children.
High-risk AI, such as systems used in recruitment, critical infrastructure, migration and law enforcement, would be subject to stringent obligations.
The proposal sparked much debate among human-rights advocates and some EU regulatory authorities. Critics often argued that permitting facial recognition at all, even with restrictions, amounts to mass surveillance that intrudes on citizens' privacy, and that the technology should be banned outright.
Facial-recognition systems were not spared elsewhere either. In the United States, many municipalities banned or heavily restricted their use: New York, San Francisco, Chicago, Boston, Oakland and Minneapolis are only some of the American cities now without facial recognition, and entire states restricted the technology as well.
Clearview AI in the crosshairs
Clearview AI came under intense criticism. Over the past year there have been calls to curb the company’s activities, including legislative moves. U.S. Senator Ron Wyden introduced a bill banning federal agencies from collecting Americans’ biometric data.
Governments in other countries, including Canada, Australia, France and the United Kingdom, criticized the company and urged it to halt the unlawful collection of citizens’ data.
Yet despite political and rights-group protests, Clearview AI not only continued to operate, increasing the number of photographs in its database to 10 billion, but also raised $30 million in a Series B round.
Russia
Russia, by contrast, is moving in the opposite direction: while many countries discuss and attempt to constrain facial recognition, the Russian authorities are actively deploying such systems.
In June Russian media reported plans to unify all city CCTV cameras into a single network. A few days later the interior ministry rolled out in several regions a new system that recognises faces and silhouettes.
Rights advocates argue that biometric-identification systems should be restricted. They contend that ubiquitous facial recognition threatens citizens’ privacy and data security.
The Moscow authorities disagree: the city’s administration said residents’ data are securely protected, with access limited to a small number of officials and agencies.
The Moscow Metro went further, launching a biometric fare-payment system. A passenger who has registered via the transport operator's app approaches a dedicated turnstile and looks into the camera; the system automatically matches the face against the database, deducts the fare, and admits the passenger to the trains.
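The fare-gate flow described above can be sketched in a few lines. This is purely an illustration of the general idea, not the Metro's actual system: the account store, embeddings, fare and similarity threshold below are all hypothetical.

```python
import math

# Hypothetical registered riders: toy face "embeddings" and balances (rubles).
# In a real deployment the embeddings would come from a face-recognition model.
ACCOUNTS = {
    "alice": {"embedding": [0.9, 0.1, 0.2], "balance": 100},
    "bob":   {"embedding": [0.1, 0.8, 0.5], "balance": 10},
}
FARE = 46          # assumed fare, for illustration only
THRESHOLD = 0.95   # assumed cosine-similarity match threshold

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def gate_attempt(camera_embedding):
    """Match the captured face against registered riders, deduct the fare,
    and report whether the turnstile opens."""
    best_id, best_sim = None, 0.0
    for rider_id, acc in ACCOUNTS.items():
        sim = cosine(camera_embedding, acc["embedding"])
        if sim > best_sim:
            best_id, best_sim = rider_id, sim
    if best_id is None or best_sim < THRESHOLD:
        return "gate closed: no match"
    if ACCOUNTS[best_id]["balance"] < FARE:
        return f"gate closed: insufficient funds for {best_id}"
    ACCOUNTS[best_id]["balance"] -= FARE
    return f"gate open for {best_id}"
```

The sketch highlights the design choice privacy advocates object to: the operator must hold a central database of biometric templates for matching to work at all.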
In the field of AI standardisation, Russian authorities have taken several steps. In mid-year the head of Rosstandart announced plans to adopt more than 200 GOSTs in the AI sphere. Three months later TC 164 “Artificial Intelligence”, which develops AI standards, unveiled the first 36 projects. Rosstandart has already approved 18 of them, most of which will come into force in 2022.
Big business — big responsibility
Facebook’s recommendation algorithms
In 2021, major IT firms faced a string of issues that drew wide public attention. Many used algorithms for purposes not always benevolent, inviting criticism from users and politicians alike.
Arguably the year's most high-profile scandal centred on Facebook. Shortly before the rebrand, a former employee revealed that the company knew its algorithms were biased and could inflame hatred and hostility, but was in no hurry to fix the problem. According to her, such content drew greater engagement, allowing Facebook to earn more from advertising.
The social network’s representatives denied the allegations and even agreed to open access to the algorithms to prove their harmlessness. Yet this did not avert consequences. Soon after the documents were released, several countries moved to regulate the operation of recommendation algorithms. Proposals were introduced in Russia, the United States and China.
Amazon’s productivity-tracking programs
In March, Amazon delivery staff complained that the company was compelling them to sign agreements consenting to surveillance in their vans; drivers who refused were fired. The in-van monitoring involves facial recognition and the collection of data from vehicle sensors, which are used to score driver productivity and to dismiss underperformers.
According to employees, the programme does not account for circumstances beyond their control, and there is no way to appeal a dismissal. In response, the California Senate banned the use of productivity-tracking programmes in the state; similar restrictions could be introduced in New York.
Apple’s CSAM scanner
In August Apple was accused of attempting to pry into users’ smartphones. The corporation announced plans to embed in its operating system a feature that would scan iPhones, iPads and Macs for Child Sexual Abuse Material (CSAM).
The public immediately criticised the move, accusing the company of trying to impose “censorship, surveillance and policing on a global scale.” Despite Apple’s promises not to turn the algorithm into a surveillance tool, the company delayed the feature’s deployment. Critics overlooked that the iCloud Mail service has scanned users’ mail for CSAM since 2019.
Nvidia’s monopoly ambitions
One of the biggest deals in tech history faced potential derailment. In September 2020 Nvidia announced its plan to acquire ARM for $40bn. The news immediately drew opposition from partners of the British company and regulators around the world.
Britain led the charge against the purchase. On 19 April the UK antitrust authority opened an investigation, finding that the acquisition of the British chipmaker “threatens national security.”
In August the regulator launched a second phase of the inquiry, effectively freezing the deal for an unspecified period.
Antitrust authorities in other countries, including the United States, China and the European Union, also assessed the merger’s risks.
ARM’s partners quickly criticised the plans. Google, Microsoft and Qualcomm warned that if the deal closed Nvidia would gain unfair access to technologies and could shut them out. Nvidia rejected the concerns, saying it would continue to work with ARM’s existing partners, and that combining the two chipmakers’ product portfolios and patents would accelerate the development of semiconductor products.
Algorithmic bias
AI algorithms trained on data curated by humans can absorb our stereotypes and worldviews, as scientists have repeatedly shown.
Throughout the year there were reports from across the globe of errors in AI tools. Some prompted smiles; others raised serious questions about the ethics of deploying AI in various sectors.
Many researchers agree that bias can be addressed: it requires more diverse training data, more thorough testing, and a prompt response to troubling signals.
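One simple illustration of "diversifying training data" is rebalancing: weighting examples so that under-represented groups carry more influence during training. The function below is our own minimal sketch of one common heuristic (inverse-frequency weighting), not a technique attributed to any system mentioned in this article.

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Assign each example a weight inversely proportional to the
    frequency of its group, so under-represented groups count more
    during training. Weights sum to the number of examples."""
    counts = Counter(group_labels)
    total = len(group_labels)
    return [total / (len(counts) * counts[g]) for g in group_labels]
```

For instance, a dataset with three examples of one group and one of another yields weights of 2/3 for the majority examples and 2.0 for the minority example, so each group contributes equally in aggregate.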
Ethical codes
A key step in tackling bias in 2021 was the adoption of AI ethics codes by several countries and international organisations.
In October the Russian government, together with major technology firms, signed the AI Ethics Code, governing the moral and ethical dimensions of AI use. The document does not compel all local developers to follow its provisions; it merely offers a set of recommendations for development. The signatories, however, have pledged to adhere to the code.
China's government also adopted AI ethics guidelines. The authorities say these will help Beijing become a world leader in AI by 2030.
In the summer the World Health Organization adopted guiding principles for ethical AI in medicine. The organisation urged responsible AI development to minimise bias against patients.
In November UNESCO adopted the first-ever global agreement on the responsible use of AI, endorsed by all 193 member states of the organisation.
These are only the first steps on the road to fair and unbiased algorithms capable of meeting the needs of a broad cross-section of users. As AI becomes more pervasive, the challenges will grow—so it is prudent to prepare now to minimise potential harm.
Conclusions
The year just ended was so packed with AI developments that a single piece cannot recount them all. Each event described will, in its own way, influence the future not only of the industry but of humanity as a whole.
But what will this world look like? We will surely find out soon.
