
Automation has pushed wages down, AI finds missing person, and other AI news

We aim to inform readers not only about events in the bitcoin industry but also about developments in related technology sectors — cybersecurity, and now artificial intelligence (AI).

ForkLog has gathered the most important AI news from the past week.

  • Automation and artificial intelligence account for 50-70% of the decline in middle- and working-class wages in the United States over the last four decades.
  • Tesla showcased one-third of its future supercomputer, which could become the second-most powerful computing device on the planet.
  • Hyundai bought Boston Dynamics for $1.1 billion.
  • A neural network found a missing man in the woods.
  • Nvidia introduced an algorithm that turns a user's 2D photo into a “talking head” for video meetings.
  • Google will develop an alternative to the Fitzpatrick skin-tone scale for testing face-recognition systems for dark-skinned users.
  • An unmanned trimaran broke down and returned to the United Kingdom.

AI has driven wage declines in the United States

Artificial intelligence and the automation of tasks account for 50-70% of the decline in middle- and working-class wages in the United States over the past four decades, according to a report by the U.S. National Bureau of Economic Research.

Experts say AI, robotics, and new technologies are driving economic inequality, and the problem is gaining momentum. According to the researchers, incomes of workers with higher education are rising, while wages for workers without a high school diploma have fallen by 15% since 1980.

The study attributes these changes to companies' push to automate tasks previously performed by humans.

Tesla will develop one of the world's most powerful supercomputers

Tesla's AI chief Andrej Karpathy shared details about the Dojo supercomputer under development.

According to him, the current installation comprises 720 nodes, each with eight Nvidia A100 accelerators, for a total of 5,760 devices with a combined performance of 1.8 exaFLOPS. He noted that the Dojo supercomputer is only a third complete.
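As a rough sanity check, these figures line up with Nvidia's published peak FP16 Tensor Core throughput for a single A100 (about 312 TFLOPS); the assumption that the 1.8 exaFLOPS number refers to that precision is ours, not Tesla's:

    # Back-of-envelope check of the reported cluster figures.
    # Assumption: the 1.8 exaFLOPS number refers to peak FP16 Tensor Core
    # throughput, for which Nvidia lists roughly 312 TFLOPS per A100.

    NODES = 720
    GPUS_PER_NODE = 8
    A100_FP16_TFLOPS = 312  # assumed per-GPU peak (dense FP16 Tensor Core)

    total_gpus = NODES * GPUS_PER_NODE                    # 5,760 accelerators
    total_exaflops = total_gpus * A100_FP16_TFLOPS / 1e6  # TFLOPS -> exaFLOPS

    print(f"Accelerators: {total_gpus}")                      # 5760
    print(f"Peak throughput: {total_exaflops:.2f} exaFLOPS")  # ~1.80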

Specifications of one of Dojo's three clusters. Data: Tesla.

Karpathy said the existing installation would enter the TOP500 list in fifth place, and a full Dojo would be second.

Earlier, Elon Musk had said the company was developing a supercomputer for the computer-vision-based Tesla Autopilot.

Hyundai acquired Boston Dynamics

Hyundai closed a deal to buy 80% of Boston Dynamics' shares for $1.1 billion. The remaining 20% stays with SoftBank.

Details of the agreement were not disclosed. Hyundai said it would safeguard the company's existing products and develop new ones.

Boston Dynamics was founded at MIT in 1992.

AI deployed to monitor California wildfires

Sonoma County, which was badly affected by fires in 2020, has deployed 800 surveillance cameras and installed a South Korean computer-vision system for around-the-clock forest monitoring and detection of ignition points.

The algorithms operate in real time to recognise smoke and trigger an alert to the monitoring-station operator.

The system went live in May 2021. In its first week it issued 60 alerts with a false-alarm rate of 0.08%. On one occasion the algorithms detected smoke ten minutes before the first observer phoned the rescue services.

If the system proves effective, it will be rolled out across the state.

Neural network finds a missing man in the forest

A neural network analyzing drone photographs located a man who had gone missing in the forest.

The algorithm, developed by VimpelCom (operating as Beeline), was used by the Lisa Alert search-and-rescue squad. Volunteers scan the terrain with a drone and locate missing people in the footage.

According to the company's executive vice-president for digital and new business development, George Held, locating a target in drone images takes volunteers a day, whereas the neural network does it in minutes. He added that thanks to the algorithm the squad finds at least one missing person each week.

Nvidia introduced an algorithm that creates a 'talking head' from a photo for videoconferences

Nvidia unveiled Vid2Vid Cameo, an AI model that converts a two-dimensional photo of a person into a 'talking head' video.

The algorithm is based on generative adversarial networks. The model was trained on a dataset of 180,000 videos to identify 20 key facial points. These points are extracted from the uploaded image to create a video that mimics the person's appearance.

According to Nvidia, the technology is intended to improve videoconferencing quality and to reduce the required bandwidth by up to 10 times.
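A rough, purely illustrative sketch of where such savings could come from, assuming that after an initial reference photo is sent, only compact per-frame keypoint data needs to be streamed; the bitrates below are our assumptions, not Nvidia's published figures:

    # Illustrative estimate only: compares a conventional compressed video
    # stream against streaming per-frame facial keypoints plus one reference
    # photo. All numbers are assumptions for the sketch, not Nvidia figures.

    FPS = 30
    VIDEO_KBPS = 1_500           # assumed bitrate of a 720p H.264 video call

    KEYPOINTS = 20               # facial keypoints per frame (from the article)
    BYTES_PER_KEYPOINT = 8       # assumed: two float32 coordinates per point
    keypoint_kbps = KEYPOINTS * BYTES_PER_KEYPOINT * 8 * FPS / 1000

    print(f"Full video stream: {VIDEO_KBPS} kbps")
    print(f"Keypoint stream:   {keypoint_kbps:.0f} kbps")  # ~38 kbps

Even allowing for protocol overhead, periodic full-frame refreshes, and the keypoint extraction itself, the raw keypoint payload is small enough that an order-of-magnitude bandwidth reduction looks plausible.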

In the near term, Vid2Vid Cameo will become part of the Nvidia Maxine SDK and the Nvidia Video Codec SDK. A demo version of the algorithm, which changes head and eye position, is available on the project site, where the developers have also published their research results.

Google to develop skin-tone classification for facial recognition

Google will develop its own scale of skin tones for testing AI-powered applications and services.

Currently the company, like many others, uses the Fitzpatrick skin-tone scale developed in the 1970s. However, researchers from the US National Security Agency have advised moving away from it, particularly when testing facial-recognition systems.

Facebook also believes the Fitzpatrick scale does not cover the diversity of skin tones.

Nvidia unveils a tool to turn sketches into photorealistic landscapes

Nvidia released Canvas, a GauGAN-based tool for generating photorealistic landscapes from sketches.

Process of working in the Canvas editor. Data: Nvidia.

Users can compose images from 15 materials, such as grass, fog, or snow, and nine styles that affect lighting and other details.

Canvas images can be split into layers to edit each part separately.

Anyone can try the tool, but it requires an Nvidia GPU with RT cores for ray tracing (RTX).

Enthusiast uses neural networks to recreate a working fragment of GTA V

sentdex, the creator of an AI education channel, built a neural network called GAN Theft Auto that reproduces a fragment of GTA V.

Result of GAN Theft Auto. Data: sentdex.

He trained the algorithm on hundreds of hours of gameplay data from a small area of GTA V. The neural network ultimately produced a semblance of a game engine in which cars move, respond to obstacles, and crash into walls.

The author stressed that he did not aim for photorealistic graphics. The goal was to reproduce the game fully using a neural network.

Unmanned trimaran Mayflower returns to the UK

The unmanned trimaran Mayflower, which set sail from Plymouth, UK, bound for Massachusetts, has returned to the United Kingdom after a malfunction was detected.

Mayflower's return route to Plymouth. Data: BBC.

Project leaders say such a mechanical fault can occur on any vessel, but it prevented the craft from achieving the required speed.

They also noted that the AI systems performed well.

After repairs, Mayflower will sail again. No timeline for the fix has been announced.

Subscribe to ForkLog news on Telegram: ForkLog AI — all the AI news!
