The Dark Era of AI: Why AI Ethics Matter

Over the past six years, artificial intelligence technology has gained enormous popularity. Developments in this field could revolutionise our daily lives.

Against the backdrop of the AI fervour, concerns about the ethical consequences of its development are also being voiced. The discussion has been fuelled by the rise in popularity of services like ChatGPT or Stable Diffusion, capable of generating plausible texts and images.

According to some experts, the risks are not limited to potential disinformation attacks and extend beyond generative services, spreading to almost all areas of human activity.

ForkLog has examined the main concepts of ethical AI and sought to determine whether this is truly important for the development of safe artificial intelligence.

  • The debates about AI ethics have continued since the technology’s emergence.
  • Algorithms interacting with people must be checked for risks in accordance with generally accepted norms of morality.
  • Artificial intelligence does not need to be made to resemble humans.
  • Scientists, tech giants and governments converge on the core principles of ethical AI.

Definition of AI ethics and the history of the term

Ethics, in itself, is a philosophical discipline that studies morality and right and wrong. In other words, it explains what is good and what is bad.

AI ethics, in turn, is the set of principles behind responsible AI: how it should act and how it should not. These principles often imply safety, fairness and a human-centred focus.

Among the first to raise the issue of “machine morality” were writers working in science fiction. In their works they pondered robots’ behaviour, their minds, feelings and interaction with humans. This strand came to be known as ‘roboethics’.

Isaac Asimov, the author of the short story ‘Runaround’, is commonly regarded as the populariser of the idea. In that story he formulated the ‘Three Laws of Robotics’. Later, in the novel ‘Robots and Empire’, Asimov added a Zeroth Law to the list. The laws read as follows:

  0. A robot may not harm humanity or, through inaction, allow humanity to come to harm (the Zeroth Law).
  1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as this does not conflict with the First or Second Law.

Asimov’s laws gained popularity and spread far beyond his literary works. They are still discussed by writers, philosophers, engineers and policymakers.

According to a number of critics, such norms are hard to implement in practice. The science fiction writer Robert Sawyer once stated that business is not interested in developing fundamental safety measures.

Transhumanist Hans Moravec holds the opposite view: he has argued that corporations using robots in production could apply a “complex analogue” of these laws to control technological processes.

The rules formulated by Asimov have also been used by government bodies. In 2007 the authorities of South Korea began developing the ‘Ethical Code for Robots’. The main provisions of the document resemble the postulates laid out by the writer.

The use of robots in the military sphere has made the issue more pressing. In 2021, the UN recorded the first killing of a person by an autonomous device: a drone with artificial intelligence independently identified a target, tracked it and decided to eliminate it.

Following the popularisation of generative algorithms such as GPT or DALL-E, discussions about AI ethics have widened. Governments of various countries are discussing potential risks and outlining plans for regulating the technology, while tech giants are actively building internal teams to develop safe AI.

“The Trolley Problem”

The development of technologies that directly affect people’s lives gives rise to a host of ethical and moral dilemmas. The most striking example involves autonomous cars: how should a robotaxi act in an emergency? Should it endanger the life of the passenger or of the pedestrian?

This dilemma is well illustrated by the thought experiment known as the ‘Trolley Problem’, formulated in 1967 by the English philosopher Philippa Foot. The gist of the problem is as follows:

“The trolley has lost control and is hurtling along a railway track. Ahead on the tracks are five people tied up and immobilised. You stand at a certain distance from the scene, near the switch. If you flip the switch, the trolley will run onto the adjacent track, where one person is tied up.”

You have no other way to influence the situation. There are only two options: do nothing and the trolley kills five people, or flip the switch and sacrifice one life. Which option is the right one?

\"Problema-vagonetki-1\"
“The Trolley Problem”, described by Philippa Foot. Data: ForkLog.

\n\n

Most moral principles can be reformulated into similar tasks and subjected to a series of experiments. In real life there may be many more options, and the consequences less catastrophic. Yet the choice made by an AI system will, in one way or another, affect lives, and it is impossible to eliminate all risks.
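
To make the abstraction concrete, here is a toy sketch (an illustration added here, not part of the original thought experiment) of how such a dilemma can be turned into a machine-checkable decision rule under a deliberately naive, purely utilitarian assumption that only the expected number of deaths matters.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """A possible result of an action, with an estimated number of lives lost."""
    action: str
    expected_deaths: float


def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Pick the action that minimises expected harm.

    Deliberately naive: it ignores responsibility, intent and the
    act/omission distinction that make the trolley problem hard.
    """
    return min(outcomes, key=lambda o: o.expected_deaths)


if __name__ == "__main__":
    trolley = [
        Outcome(action="do nothing", expected_deaths=5),
        Outcome(action="flip the switch", expected_deaths=1),
    ]
    print(utilitarian_choice(trolley))  # -> flip the switch
```

The point of the sketch is that everything hard about the problem, responsibility, intent, the difference between acting and failing to act, is exactly what such a rule leaves out.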

AI should not learn from humans

Although moral and ethical principles are formed by humans, AI should not resemble humans, because the algorithms may pick up biases inherent in society.

This is well illustrated by facial recognition technology. To build such systems, large datasets of images of people are collected. The process is often handled by automated bots that do not take sampling diversity or context into account, so the dataset may be biased from the outset.

The next stage of development is model building and training. At this stage, biases and distortions may also creep in, including due to the ‘black box’ problem. Engineers simply do not know why the algorithm arrives at a particular decision.

Risks of bias also exist at the interpretation stage, where humans play an even bigger role. For example, an algorithm is trained on a specific sample of people and yields certain results; the system is then applied to a completely different group whose data differ drastically.
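
One simple way to surface this kind of mismatch, shown here as a hypothetical sketch whose group labels and numbers are purely illustrative, is to compare the demographic composition of the training sample with that of the population the system is actually applied to:

```python
from collections import Counter


def composition(labels: list[str]) -> dict[str, float]:
    """Share of each demographic group in a sample."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


def composition_gap(train_groups: list[str], deployed_groups: list[str]) -> dict[str, float]:
    """Absolute difference in group shares between the training data and the
    population the model is applied to. Large gaps warn that the training
    sample does not represent the deployment population."""
    train = composition(train_groups)
    deployed = composition(deployed_groups)
    groups = set(train) | set(deployed)
    return {g: abs(train.get(g, 0.0) - deployed.get(g, 0.0)) for g in groups}


if __name__ == "__main__":
    train = ["A"] * 80 + ["B"] * 20          # training set skewed towards group A
    deployed = ["A"] * 50 + ["B"] * 50       # deployment population is balanced
    print(composition_gap(train, deployed))  # {'A': 0.3, 'B': 0.3}
```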

In 2021, Dutch activists demonstrated distortions in a system used to distribute social benefits, arguing that its use violated human rights. As a result, municipal authorities abandoned the software.

Developers themselves report problems with transferring models between populations. In March 2023, a group of researchers developed an algorithm that forecasts the risk of pancreatic cancer. To train the system, the researchers mainly used medical records from Danish clinics.

The researchers warned that the algorithm’s accuracy drops significantly when it is applied to U.S. patients, so it needs to be retrained on American data.
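
A minimal sketch of the kind of pre-deployment check this implies (function names, thresholds and numbers are assumptions for illustration, not taken from the study): evaluate the already trained model on a hold-out sample from the new population and block deployment if accuracy falls too far.

```python
def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)


def safe_to_deploy(source_acc: float, target_acc: float, max_drop: float = 0.05) -> bool:
    """Allow deployment only if accuracy on the new population has not dropped
    by more than `max_drop` compared with the original cohort."""
    return (source_acc - target_acc) <= max_drop


if __name__ == "__main__":
    # Illustrative evaluation results: the model scored on a hold-out set from
    # the original cohort and on a small sample of records from a new one.
    source_acc = accuracy([1, 0, 1, 1, 0, 1, 1, 0, 1, 1], [1, 0, 1, 1, 0, 1, 1, 0, 1, 0])
    target_acc = accuracy([1, 1, 1, 0, 0, 1, 0, 0, 1, 1], [1, 0, 0, 1, 0, 1, 1, 0, 0, 1])
    print(f"source accuracy={source_acc:.2f}, target accuracy={target_acc:.2f}")
    if not safe_to_deploy(source_acc, target_acc):
        print("Accuracy drop too large: retrain on local data before deployment.")
```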

Popular principles of ethical AI

Research groups, human rights advocates, tech giants and governments propose different approaches, yet they share a number of similarities. Here are the most popular views.

Fairness and non-discrimination

AI systems should be developed in a way that minimises discrimination on the basis of race, gender, religion or other protected characteristics. Biases in algorithms can reinforce existing inequalities and harm certain groups.

For example, facial recognition systems have shown biased accuracy, with higher error rates for certain racial and gender groups. In 2019, a Black resident of New Jersey (USA) spent 10 days in jail due to an algorithm error. Similar cases occurred in other cities, and police faced a wave of criticism over the use of biased technologies.

Ethical AI methods can help detect and mitigate such biases, contributing to fairness in decision-making.
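
One common way to detect such bias, shown here as a minimal sketch with made-up audit data rather than a description of any deployed system, is to compare error rates across groups and flag gaps above a chosen threshold:

```python
from collections import defaultdict


def error_rates_by_group(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records: (group, true_label, predicted_label) triples.
    Returns the misclassification rate per group."""
    errors: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}


def flag_disparity(rates: dict[str, float], max_gap: float = 0.02) -> bool:
    """True if the gap between the best- and worst-served group exceeds max_gap."""
    return (max(rates.values()) - min(rates.values())) > max_gap


if __name__ == "__main__":
    # Hypothetical audit data: (group, ground truth, model output).
    audit = [("group_1", 1, 1)] * 95 + [("group_1", 1, 0)] * 5 \
          + [("group_2", 1, 1)] * 80 + [("group_2", 1, 0)] * 20
    rates = error_rates_by_group(audit)
    print(rates, "disparity:", flag_disparity(rates))  # 0.05 vs 0.20 -> True
```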

Transparency and explainability

Transparency and explainability are crucial for building trust in AI. Systems should be designed so that their processes and decision-making outcomes are understandable to all stakeholders.

Generative AI developers are considering the possibility of embedding watermarks to mark content generated by algorithms. They argue this will raise user awareness and prevent the spread of misinformation.
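
As a toy illustration of the watermarking idea (a simplified least-significant-bit scheme, not how any particular vendor actually marks content), a generator could hide a short identifier in the pixels of an image it produces, and a verifier could later read it back:

```python
import numpy as np


def embed_watermark(image: np.ndarray, message: str) -> np.ndarray:
    """Write the bits of `message` into the least significant bits of the
    first pixels of an 8-bit image. A toy scheme, for illustration only."""
    bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]
    flat = image.flatten().copy()
    if len(bits) > flat.size:
        raise ValueError("image too small for message")
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.array(bits, dtype=flat.dtype)
    return flat.reshape(image.shape)


def read_watermark(image: np.ndarray, length: int) -> str:
    """Read `length` characters back out of the least significant bits."""
    bits = image.flatten()[: length * 8] & 1
    chars = [int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, length * 8, 8)]
    return bytes(chars).decode()


if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a generated image
    marked = embed_watermark(img, "AI-GEN")
    print(read_watermark(marked, len("AI-GEN")))  # "AI-GEN"
```

Real proposals rely on far more robust schemes, since a least-significant-bit mark disappears after simple re-encoding or resizing.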

Another example of transparent AI use is Illinois’ biometric privacy law. The document prohibits companies from using technologies such as facial recognition without explicit user notification.

Under this law, companies such as Google and Clearview AI have already faced multimillion-dollar fines or restrictions on their operations.

Proponents of this approach argue that transparency promotes responsible decision-making and prevents misuse of AI systems.

Accountability and responsibility

Stakeholders should be accountable for the development and use of AI systems, as well as their potential impact on people, society and the environment. They must ensure the technology is used for beneficial purposes.

This includes considering the ethical implications of deploying AI systems and actively participating in decision-making throughout the development and deployment process.

Privacy and data protection

AI systems often rely on personal information, and safeguarding privacy is a key factor. Ethical AI methods include obtaining consent for data collection and use. It is also necessary to ensure control over how service providers handle users’ information.

This ensures respect for privacy rights and the protection of personal information, experts say.
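
A minimal sketch of the consent-gating idea (the field names are hypothetical, not any particular provider’s API): records whose owners have not given explicit consent are excluded before any processing happens.

```python
from dataclasses import dataclass


@dataclass
class UserRecord:
    user_id: str
    email: str
    consented_to_processing: bool  # hypothetical consent flag captured at collection time


def records_allowed_for_processing(records: list[UserRecord]) -> list[UserRecord]:
    """Keep only records whose owners gave explicit consent."""
    return [r for r in records if r.consented_to_processing]


if __name__ == "__main__":
    users = [
        UserRecord("u1", "a@example.com", True),
        UserRecord("u2", "b@example.com", False),
    ]
    print([r.user_id for r in records_allowed_for_processing(users)])  # ['u1']
```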

For this reason, in late March 2023, ChatGPT was blocked in Italy. The local regulator accused the developers of unlawful collection of personal data, which forced OpenAI to adopt more transparent methods of operation.

Likely to avoid similar problems, Google excluded from Bard’s worldwide rollout not only the EU, where the GDPR applies, but the entire European continent.

Human-centredness

Ethical AI places human-centredness at the forefront. This means that algorithms should be designed to augment human capabilities, rather than replace them.

The principle includes considering the potential impact of AI on jobs, social dynamics and overall well-being, and ensuring that the technology aligns with human values.

Business and the Law

Adhering to ethical AI norms is not only responsible conduct; it is also necessary for extracting commercial value. Ethical missteps can create business risks such as production failures, legal issues or reputational damage to the brand.

In theory, the surveillance algorithms Amazon uses to monitor delivery workers could protect drivers. In practice, workers were unhappy about intrusive monitoring, the story caused a media stir, and top management had to defend itself.

Meanwhile, the Writers Guild of America announced its largest strike in 15 years, partly connected with the spread of generative AI. Writers are demanding a ban, or substantial restrictions, on the use of the technology in scriptwriting, as well as a rethinking of pay.

How the current protest will end is unknown; however, the 100-day strike in 2007–2008 cost the California economy $2.1 billion.

One reason for such events may be the unchecked development and use of algorithms. At the same time, individual countries and even cities are taking their own approaches to the moral dimension of the issue, which means companies need to know the AI rules of every jurisdiction in which they operate.

Notable examples include the “right to explanation” provision in the GDPR and the corresponding sections of California’s privacy law.

Some American cities have decided to restrict the use of algorithms at the local level. In November 2022, the New York City authorities banned the use of biased AI systems in hiring. Earlier, the city enacted a regulation obliging private establishments to protect the privacy of visitors’ biometric data.

One of the largest regulatory initiatives in AI is the AI Act, which is set to be adopted in the European Union.

***

Ethical AI matters in the development and deployment of technology: it can ensure fairness, transparency, privacy and a human-centred orientation, to the benefit of society.

Beyond this, applying guiding principles can provide commercial benefits for companies, including preventing concerns among employees, customers and other stakeholders.

Debates around AI can develop into constructive dialogue. Excessive regulation could hamper the technology’s development and squander its untapped potential, so well-calibrated solutions are needed for the ethical use of AI that satisfy as many people as possible.
