
Anthony Levandowski closes his Church of AI, an AI algorithm predicts COVID-19 mortality, and other AI news
We aim to inform readers not only about events in the Bitcoin industry but also about developments in adjacent technology fields: cybersecurity, and now the world of artificial intelligence (AI).
ForkLog has assembled the most important AI news from the past two weeks.
- The world’s first Church of AI was closed by Anthony Levandowski, a former Google engineer.
- The Minneapolis City Council voted to ban its police department from using facial-recognition technologies.
- In the United States, an AI algorithm determined that COVID-19 incidence was three times higher than official statistics.
- Facebook’s AI moderator became better at understanding and flagging abusive comments on the company’s platforms.
- Baidu will open a plant to manufacture AI chips for autonomous vehicles.
- In 2020, U.S. Customs and Border Protection's facial-recognition systems did not detect a single impostor at airports.
- Spanish researchers developed an algorithm that can locate marine litter in aerial images.
Minneapolis bans police from using facial recognition
The Minneapolis City Council voted for a resolution barring the city’s police from using facial-recognition software. The decision was approved by 13 council members, with no votes against.
Under the new ban, the Minneapolis Police Department cannot use any facial-recognition technology, including facial-recognition software from Clearview AI. The company sells access to a large database of facial images, many of which are scraped from social media, to federal law-enforcement agencies, private companies, and a number of U.S. police departments.
Earlier media reports stated that the Minneapolis Police Department had maintained ties with Clearview AI, as did the office of the Hennepin County Sheriff, which will not be bound by the new ban.
Minneapolis is not the first U.S. city to restrict facial-recognition technology. In 2020, Portland, Oregon, passed laws prohibiting city bureaus from using facial recognition and prohibiting private companies from deploying the technology in public spaces. San Francisco, Oakland, and Boston had previously banned facial recognition.
Restrictions on the use of AI-based facial-recognition systems nationwide are linked not only to concerns that the technology disproportionately targets minority communities, but also to observed deficiencies in recognizing non-white faces.
Google appoints head of AI division
Google appointed Dr. Marian Croak, formerly vice president of engineering, as the head of the AI division within Google Research.
In her new role, Croak will oversee teams working on accessibility, AI for public good, algorithmic health equity, ethical AI, and other initiatives. She will report to Jeff Dean, senior vice president of Google AI Research and Health.
Marian Croak has decades of experience in technology, from her work on voice over IP to more than 200 patents in various fields. For the last six years she has been a vice president at Google, contributing to projects ranging from site reliability engineering to providing public Wi‑Fi on Indian railways.
AI trained to forecast COVID-19 mortality with up to 90% accuracy
Researchers from the University of Copenhagen's Department of Computer Science developed an AI algorithm that can predict, with up to 90% accuracy, whether a person who has not yet been infected would die from COVID-19 if they contract the virus. Once an infected patient is admitted to hospital, the algorithm can predict with 80% accuracy whether they will require mechanical ventilation.
The researchers trained the algorithm using health data from 3,944 Danish COVID-19 patients, teaching the computer to recognise patterns and correlations in patients’ conditions before infection and during the disease’s active phase.
“Our results show that age and weight are the most decisive factors in how severely someone will be affected by COVID-19. But the probability of death or the need for mechanical ventilation also rises if you are male, have high blood pressure, or have a neurological condition,” said Mads Nielsen, professor of computer science at the University of Copenhagen.
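The article does not describe the Copenhagen team's actual model or features, so the sketch below is purely illustrative: a minimal example of training a risk classifier on tabular patient records with scikit-learn, using synthetic data and feature names assumed from the risk factors quoted above (age, weight, sex, blood pressure, neurological conditions).

```python
# Illustrative sketch only: the researchers' real pipeline is not shown in the
# article. Feature names are assumptions and the data below is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3944  # size of the Danish patient cohort cited in the article

# Synthetic stand-in records; the real study used anonymised health data.
patients = pd.DataFrame({
    "age": rng.integers(20, 95, n),
    "bmi": rng.normal(27, 5, n),
    "male": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "neurological_condition": rng.integers(0, 2, n),
})
# Fabricated outcome loosely tied to age and BMI so the example runs end to end.
risk = 0.04 * (patients["age"] - 60) + 0.1 * (patients["bmi"] - 27)
died = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    patients, died, test_size=0.2, random_state=0, stratify=died
)

# Any tabular classifier works for the sketch; gradient boosting is a common choice.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Measure how well the model separates survivors from non-survivors on held-out data.
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice, the quality of such a model depends far more on the breadth and accuracy of the patient records than on the particular classifier used.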
Researchers are currently working with the capital region of Denmark to translate these findings into practice.
Algorithm suggests COVID-19 infections in the US are three times higher than official counts
Researchers at the University of Texas Southwestern Medical Center developed an algorithm that estimates true COVID-19 infections in the United States at nearly three times the official tally. According to the model, more than 71 million people in the U.S. had been infected by February 4, compared with 26.7 million confirmed cases. It also estimated that 7 million people in the country were currently infected and potentially contagious.
“Estimates of real infections are the first to reveal the true scale of COVID‑19 in the U.S. and around the world,” said the study’s author Jangseok No.
He concedes that these estimates are approximate, given uncertainties around factors such as the infection fatality rate, but notes they are still likely more accurate than the confirmed case counts used to guide public-health policy.
IBM and Palantir unveil no-code/low-code AI app platform for enterprises
IBM and Palantir announced a partnership to combine hybrid cloud, AI, data processing and operations technology in a new enterprise offering.
Palantir says the solution for IBM Cloud Pak for Data will simplify the creation and deployment of AI-powered applications using IBM Watson, helping users access, analyse and act on large volumes of data dispersed across hybrid cloud environments without requiring deep technical know-how.
IBM says the offering will serve as a no-code/low-code platform for deploying AI applications that can process data efficiently, extend existing enterprise systems and accelerate digital transformation.
Palantir for IBM Cloud Pak for Data is slated for public release in March 2021.
U.S. CBP facial recognition failed to spot a single impostor at airports in 2020
In 2020, U.S. Customs and Border Protection (CBP) used facial-recognition tools to process more than 23 million travelers at more than 30 points of entry and did not detect a single instance of a traveler at an airport posing as someone else.
According to the agency's fiscal-year 2020 report, the facial-recognition-based biometric identification system caught fewer than 100 impostors at pedestrian border crossings. This may indicate either that fraudulent entries are very rare or that CBP's biometric-ID system is ineffective at catching them.
The entry-exit tracking system has been in operation since 1996, and the Patriot Act of 2001, a federal law that gave the government and law enforcement broad surveillance powers, expanded its biometric requirements to include fingerprints, iris scans and facial photographs. CBP took the lead on using facial-recognition technologies under this program in 2013.
Baidu plans to establish AI-chip manufacturing
Chinese IT giant Baidu plans to open a subsidiary to manufacture chips for AI computing and related workloads, which could boost China’s efforts to develop a domestic semiconductor industry. CNBC reports that the search giant is in talks with venture firms GGV and IDG Capital about investing in the new venture.
The firm would sell chips to clients across various sectors, including automakers, and could support Baidu’s work on electric and autonomous vehicles.
The news comes as China seeks to strengthen its domestic chip-making capability. The country’s dependence on foreign equipment was highlighted by the trade war with the United States, which last year imposed restrictions on the sale of microchips and components to China.
China has set a target of producing 70% of the semiconductors it uses domestically by 2025. By comparison, only 16% of those chips were produced domestically in 2019.
Facebook sharpens AI for detecting abusive comments
Facebook said its AI now does a better job of identifying bullying and harassment in comments. Between October and December last year, the company nearly doubled the amount of content removed for violating its bullying and harassment policies.
In the fourth quarter Facebook took action on 6.3 million pieces of such content, up from 3.5 million in the previous quarter. Instagram saw a similar rise, with removals increasing from 2.6 million to 5 million. Facebook also reported a marked improvement in proactively detecting abuse before users report it: its proactive detection rate for bullying and harassment nearly doubled, from 26% to 49%.
Facebook highlighted the effectiveness of its AI-based moderation, saying its automated systems are better at identifying abuse and harassment in comments.
“Historically this has been a challenge for AI because deciding whether a comment violates our policy often depends on the context of the post it responds to,” wrote Facebook’s chief technology officer Mike Schroepfer in a blog post.
He said Facebook’s AI has also become better at parsing the context of memes and other forms of “mixed media.”
Over the past year the company has relied more heavily on automated moderation tools, as many human moderators have been unable to work in the office due to the pandemic.
Researchers build AI to detect and reward “good dogs”
Two Colorado State University researchers developed an AI system that detects whether a dog is sitting, standing or lying down. If the furry friend assumes the correct position on command, the system rewards it by automatically dispensing a treat.
Jason Stock and Tom Cavey, computer-science PhD students, trained the algorithm on more than 20,000 images of dogs in various poses and ran it on an NVIDIA Jetson single-board computer.
The developers report pose-detection accuracy of 92% at roughly 39 frames per second and suggest the system could be used by trainers and dog owners for training and remote monitoring of pets.
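The researchers' implementation details are not covered in the article, so the loop below is only a rough sketch of how such a system could be wired together on a Jetson-class device: grab a camera frame, classify the dog's pose, and trigger the dispenser once the commanded pose has been held for several consecutive frames. Both `classify_pose` and `dispense_treat` are hypothetical placeholders, not the authors' code.

```python
# Illustrative sketch only: classify_pose() stands in for the trained network,
# and dispense_treat() stands in for the hardware that releases a treat.
import random
import time

import cv2  # OpenCV, used here only to grab frames from a camera

POSES = ("sit", "stand", "lie")
REQUIRED_CONSECUTIVE = 10  # frames the commanded pose must be held

def classify_pose(frame):
    """Stand-in for the trained classifier: guesses a pose at random.
    A real system would run a CNN on the frame and return one of POSES."""
    return random.choice(POSES)

def dispense_treat():
    """Stand-in for driving the treat dispenser (e.g. a GPIO-controlled motor)."""
    print("Good dog! Dispensing treat.")

def reward_loop(command="sit"):
    cap = cv2.VideoCapture(0)  # default camera on the device
    streak = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # no camera available or stream ended
            if classify_pose(frame) == command:
                streak += 1
            else:
                streak = 0
            if streak >= REQUIRED_CONSECUTIVE:
                dispense_treat()
                streak = 0
                time.sleep(5)  # cooldown so one correct pose earns one treat
    finally:
        cap.release()

if __name__ == "__main__":
    reward_loop("sit")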