
Algorithm models the Universe, a bot colours photos, and other AI news

We aim to inform readers not only about events in the Bitcoin industry, but also to report on what is happening in adjacent technological spheres — cybersecurity, and now in the world of artificial intelligence (AI).

ForkLog has gathered the most important AI news from the past week.

  • IBM unveiled the world’s first 2-nanometer process for manufacturing energy-efficient chips.
  • One of the co-founders of ARM said Nvidia intends to wage unfair competition.
  • In China, researchers created the world’s largest high-resolution satellite image dataset FAIR1M.
  • Scientists developed an AI that speeds up Universe simulations by 1000 times.
  • Researchers from Columbia University created an algorithm that identifies early signs of dementia in elderly drivers by analyzing their driving style.
  • A U.S. congressman joined the OpenAI board of directors.
  • A Dutch developer created an algorithm for recognizing emotions in livestock.
  • Google presented a neural network that transfers portrait photos to a new background and realistically adjusts the lighting to match it.
  • A student from Ecuador created a Twitter bot that colours black-and-white photos using AI.

IBM unveils 2-nm process for chip fabrication

Tech giant IBM unveiled the 2-nanometer process for chip fabrication.

Compared with current 7-nm solutions, the new technology offers up to a 45% uplift in performance or a 75% reduction in energy consumption.

Engineers highlighted a number of potential advantages of the new process.

IBM also noted that the 2-nm process could fit 50 billion transistors on a chip the size of a fingernail.
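As a rough sanity check on that figure, a back-of-the-envelope calculation works out the implied density. The 150 mm² die area is an assumption (a common reading of “fingernail-sized”; IBM did not state it in this article):

```python
# Back-of-the-envelope density check for IBM's 2 nm claim.
# Assumption: a "fingernail-sized" die is roughly 150 mm^2.
transistors = 50e9
die_area_mm2 = 150

density_per_mm2 = transistors / die_area_mm2
print(f"~{density_per_mm2 / 1e6:.0f} million transistors per mm^2")
```

Under that assumption the claim works out to roughly 333 million transistors per square millimetre.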

No timelines for commercial deployment were announced by the company.

China builds world’s largest high-resolution satellite image dataset

Chinese researchers created the world’s largest high-resolution satellite image database FAIR1M. They say the dataset will enable AI to identify objects with accuracy down to determining the function of a structure or the model of an airplane.

Example of a labeled image. Data: Chinese Academy of Sciences.

The new database contains 15,000 images across 1 million “scenes.” With it, AI can not only identify objects on the Earth’s surface, but also relate them and determine context.

80% of the images were obtained from Gaofen Earth-observation satellites. The remainder were collected from open sources, such as Google Maps.

The FAIR1M database was developed for civilian purposes. Access will be available to all who wish to use it.

Researchers build AI algorithm that detects early signs of dementia in elderly drivers

Columbia University researchers developed an AI algorithm that identifies mild cognitive impairment and dementia in older adults based on their driving style.

Researchers used data from on-board devices that recorded the behaviour of 2,977 drivers aged 65 to 79. Based on this, the authors derived 29 behavioural variables used to create cognitive profiles.

They then built a machine-learning model to forecast cognitive problems, achieving 88% accuracy.
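The paper’s actual model is not described here, but the general setup — a vector of 29 behavioural variables per driver mapped to a binary cognitive-status label — can be sketched with a plain logistic regression on synthetic data. All data and the choice of model below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical stand-in for the Columbia setup: each driver is a
# vector of 29 behavioural variables; the label marks impairment.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 29))          # 200 synthetic drivers
w_true = rng.normal(size=29)            # hidden "true" weights
y = (X @ w_true + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Plain logistic regression trained by gradient descent stands in
# for the paper's (unspecified) machine-learning model.
w = np.zeros(29)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))      # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)   # gradient step

acc = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On this nearly separable synthetic data the toy model fits well; the 88% figure in the study refers to their real-world evaluation, not anything reproducible from this sketch.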

Scientists hope their work will help safeguard road users and elderly drivers.

US congressman joins OpenAI board

American congressman Will Hurd joined the board of the non-profit OpenAI.

Hurd is one of the leading AI policy experts and helped draft the legislation defining the national AI strategy, approved in 2020.

“We look forward to Will applying his unique experience to help fulfil our mission to develop and deploy AI for the public good,” said Greg Brockman, chair and CTO of OpenAI.

Researcher builds an algorithm to recognise emotions in farm animals

A researcher from Wageningen University (the Netherlands) developed an AI system that identifies and measures the emotional state of farm animals using facial recognition.

The algorithm, in real time, identifies nine emotional states, including aggressive, calm or neutral. Its accuracy stood at 85%.

Nine emotional states of animals. Data: the study.

According to the developer, the system will help farmers monitor livestock in real time and reduce the number of invasive procedures used to assess their condition, such as hormone sampling.

Google develops neural network for realistic transfer of selfies to a new background

Google trained a neural network to transfer portraits to a new environment and realistically adjust the lighting.

The algorithm identifies a person in the photo, separates them from the background, and places them in the new setting. It then analyses the lighting on the subject and in the new scene, builds light and environment maps, and aligns them with each other.
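Of those stages, only the final blend lends itself to a compact illustration; the segmentation and relighting steps are learned models. A minimal alpha-compositing sketch (NumPy, hypothetical arrays — not Google’s implementation) of how a matted subject is placed into a new background:

```python
import numpy as np

def composite(foreground, alpha, background):
    """Blend a segmented subject into a new scene via its alpha matte.

    foreground, background: (H, W, 3) float arrays in [0, 1]
    alpha: (H, W) matte, 1.0 where the subject is fully opaque
    """
    a = alpha[..., None]  # broadcast the matte across colour channels
    return a * foreground + (1.0 - a) * background
```

Where the matte is 1 the subject fully replaces the background; intermediate values give the soft edges around hair and shoulders that make the transfer look natural.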

To train the network, developers used data gathered for another study. They built a spherical pavilion, installed 331 LED lighting blocks, 42 ordinary cameras and 16 depth cameras. They then photographed 70 volunteers of different gender, age and skin tone inside the structure. In total, the researchers obtained around 8 million portraits with varied lighting.

Google did not disclose where the algorithm will be used; the researchers will present further details at SIGGRAPH 2021.

Musicians urge Spotify not to use voice-recognition technology

A group of musicians and rights advocates urged the company Spotify not to use its patented voice-recognition system that determines “emotional state,” age and gender of the user to tailor playlists.

They worry that this could enable Spotify or any other company deploying the technology to manipulate users, collect their personal information and discriminate against vulnerable groups. Musicians also believe that the technology would amplify economic inequality in the industry.

Spotify did not comment on the letter. Earlier, company managers stated that they do not plan to use the patented technology in their services.

Student creates a Twitter bot that colours images using AI

A 21-year-old student from Ecuador, Giovanni Zambrano, created a Twitter bot that colours black-and-white photos.

To use it, a user must upload a photo to the social network and mention the bot in replies. Within 30 seconds, the algorithm will return the photo, but in colour. The bot responds to only one user request per hour to avoid overloading the network.
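The one-reply-per-user-per-hour rule is a classic per-user rate limit. A minimal sketch of the idea (hypothetical — the bot’s actual implementation is not public):

```python
import time

class PerUserRateLimiter:
    """Allow at most one request per user per time window,
    mirroring the bot's one-request-per-hour rule."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.last_seen = {}  # user_id -> timestamp of last allowed request

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        last = self.last_seen.get(user_id)
        if last is not None and now - last < self.window:
            return False  # still inside the cool-down window
        self.last_seen[user_id] = now
        return True
```

Tracking only the last allowed timestamp per user keeps memory proportional to the number of distinct users while enforcing the cool-down.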

Example of the algorithm’s photo-colouring. Data: Twitter.

The algorithm is based on a generative adversarial network. It can also colour videos and GIF animations, though this takes substantially longer.

Deepfake shows how the first Mortal Kombat adaptation with Van Damme in the lead might have looked

A blogger used a deepfake and neural networks to superimpose Jean-Claude Van Damme’s face on Linden Ashby, who played Johnny Cage in the 1995 film Mortal Kombat.

Interestingly, the Hollywood action star was initially offered the role of the martial-arts mentor in the cult film, but he declined in favour of starring in the Street Fighter adaptation.

Subscribe to ForkLog’s AI news on Telegram: ForkLog AI — all AI news!
