Robots See Through Walls, Military Adopts Microsoft HoloLens Headsets, and Other AI News

We aim to inform readers not only about developments in the bitcoin industry but also about what is happening in related technology fields — cybersecurity, and now the world of artificial intelligence (AI).

ForkLog has gathered the week’s most important AI news.

  • Arm unveiled a new generation of AI-enabled processors — the first in ten years.
  • Microsoft won a contract to supply HoloLens mixed-reality headsets to the U.S. Army. The deal is valued at nearly $22 billion.
  • Researchers from MIT have developed a robot that can “see” through walls.
  • A group of researchers from the University of Toronto and LG AI Research developed an “explainable” AI to tackle the “black box” problem.
  • The number of AI-enabled video cameras in smart cities is projected to reach 350 million by 2025.

Arm Unveils Ninth-Generation AI-enabled Processors

Chip designer Arm announced Armv9 — the first new mobile processor architecture in ten years. The new generation focuses on three core improvements: security, artificial intelligence and overall higher performance.

Armv9 will support Scalable Vector Extension 2 (SVE2), designed for machine learning and digital signal processing tasks. SVE2 is expected to deliver substantial performance gains for virtual and augmented reality, as well as image processing and speech recognition.

The new generation will support Confidential Compute Architecture — a hardware security environment that is created dynamically and protects users’ sensitive data from the rest of the system.

Arm also promises overall performance improvements of more than 30% through software and hardware optimizations. The company said all programs built for previous architectures will run without issues on Armv9.

Partners can expect shipments of the first batches by the end of the year.

Boston Dynamics Unveils Stretch, a Robot for Warehouse Automation

The robotics maker Boston Dynamics unveiled Stretch, a robot designed to move boxes to automate warehouses and distribution centers.

The robot has a square mobile base on wheels, enabling it to move across loading docks, maneuver in tight spaces and adapt to changing layouts.

To move boxes, Stretch uses a large suction-based manipulator that grips the payload.

The company also says the robot is equipped with Boston Dynamics’ proprietary computer-vision technology. With it, Stretch can recognize different box types without retraining the model for each user.

In an hour the robot can move 800 boxes weighing up to 23 kg each. High-capacity batteries allow it to operate for eight hours between charges. No special training is required for an operator to control the robot.

The company is currently seeking partners for pilot testing. It has not disclosed the price of Stretch but plans to start selling the device in 2022.

Microsoft to Supply U.S. Army with 120,000 Augmented-Reality Headsets in a $22 Billion Contract

Microsoft won a U.S. Department of Defense contract to supply augmented-reality headsets. The deal, valued at about $21.88 billion, calls for shipping 120,000 devices built on HoloLens technology over the next decade.

This is not the first contract between Microsoft and the DoD. In 2018 the company and the agency signed a contract worth $480 million under which the software giant’s specialists were to deliver modified mixed-reality headsets tailored for military use.

The enhanced headsets combine a range of sensors, with information projected onto an integrated display.

Back then, some employees dissented, signing an open letter urging company leadership not to sell technologies to the military.

CEO Satya Nadella responded to critics, but this did not prevent the company and the agency from collaborating on the updated headset.

Researchers Create a Robot Capable of Seeing Through Walls

Researchers at the Massachusetts Institute of Technology have built a robot capable of seeing objects through obstacles.

The team augmented a classic computer-vision system with radio-frequency identification (RFID), whose signals can penetrate even walls.

Object recognition through obstacles works as follows: a special RF tag containing detailed information about the item is attached to the tracked object. The robot’s reader then sends a radio signal, which the tag reflects back along with the stored information, allowing the robot to determine the item’s exact location.
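MIT has not published the implementation, so the following is only an illustrative sketch of one plausible pipeline: estimate the tag’s range from signal strength at several reader positions, then trilaterate in 2D. All names and numbers here (the log-distance model, reference power `P0`, path-loss exponent `N`, the reader positions) are assumptions for the toy example, not details of the actual system.

```python
import math

P0 = -40.0  # assumed tag signal strength (dBm) at 1 m reference distance
N = 2.0     # assumed path-loss exponent (free space)

def rssi_at(reader, tag):
    """Simulated tag reply strength using a log-distance path-loss model."""
    d = math.dist(reader, tag)
    return P0 - 10 * N * math.log10(d)

def distance_from_rssi(rssi):
    """Invert the log-distance model to recover the tag's range."""
    return 10 ** ((P0 - rssi) / (10 * N))

def trilaterate(readers, dists):
    """Locate the tag from three (position, range) pairs by subtracting
    the first range equation from the other two, which yields a 2x2
    linear system in the tag coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = readers
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

readers = [(0.0, 0.0), (6.0, 0.0), (0.0, 8.0)]  # robot poses while reading
tag = (3.0, 4.0)                                # hidden item behind a wall
dists = [distance_from_rssi(rssi_at(r, tag)) for r in readers]
print(trilaterate(readers, dists))  # recovers roughly (3.0, 4.0)
```

In practice RFID localization of this kind relies on phase measurements rather than raw signal strength, which this sketch simplifies away; the point is only the geometry of turning tag ranges into a position.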

Scientists believe that combining RGB and RFID sensors will help robots navigate multi-room environments and easily locate items. They say the technology could help move goods around a warehouse or help a machine pull a screwdriver from a cluttered tool set.

Details about the device will be presented at the IEEE International Conference on Robotics and Automation, which will take place in May.

Google Duplex Automatic Booking by Phone Goes Live in 49 States

Google launched Duplex, an AI service for automatic bookings by phone, across almost the entire United States; only Louisiana is excluded. The company said local law blocks expansion of the service in that state.

Duplex was unveiled at Google I/O in 2018. As a supplement to Google Assistant, the service can call local businesses and reserve tables at restaurants or book a haircut appointment on behalf of the user, speaking in a realistic-sounding artificial voice.

Over the past two years the company tested the service in several states. During this time Duplex has faced several scandals.

For example, in 2019 it emerged that 25% of calls via Duplex were made by people, and 19% began with a robotic system but ended with human intervention. Many of the AI assistant’s calls were also flagged by restaurants as spam.

However, Google stated that as of October 2020, 99% of calls made through the service were fully automated.

Researchers Near Solution to the AI ‘Black Box’ Problem

Researchers from the University of Toronto and LG AI Research have developed an explainable AI (XAI) model that can shed light on how machine-learning systems make decisions.

According to the researchers, such methods will allow engineers to tackle the “black box” problem, in which developers do not know how an algorithm arrived at a given conclusion.

For example, a computer-vision model trained to recognize specific objects in images determines which aspects to attend to and which to ignore. Its creators have no idea how exactly it arrives at the result.

Developers call XAI a “glass box” that makes the decision process transparent. The algorithm runs alongside the ML model and checks recognition accuracy. This approach also enables debugging and assessing training effectiveness.

[Figure: Comparison of the team’s explainable-AI algorithm (SISE, far right) with other explainable models. Data: study.]

Scientists believe that their development could help reduce algorithmic bias, which is especially important in fields like medicine where inaccuracies can cost a patient’s life.

“To fully understand and interpret a model’s prediction, a physician must know how the algorithm arrived at it,” added Mahesh Sudhakar, a University of Toronto alumnus and member of the research group.

Study: AI Cameras in Smart Cities Will Reach 350 Million by 2025

By 2025, 350 million AI-enabled cameras will be in operation in smart cities worldwide, according to a study by ABI Research.

Experts believe that by then AI chips in cameras will be mainstream, and on-device deep learning will help city and law enforcement agencies manage transit, monitor pedestrians, detect threats, and enhance security.

They also highlighted two technologies that will catalyse AI in cameras in the coming years.

The first is the widespread deployment of edge devices, which will allow video data to be processed where it is generated. This reduces load on communication channels since devices won’t need to send gigabytes of video data to servers for processing.

The second is 5G networks. They noted that operators will not manage to deploy 5G broadly in the coming years. However, targeted deployments will provide reliable and seamless connectivity to city infrastructure, which is precisely what is needed for machine learning.

Analysts do not rule out that smart-camera integrators may face challenges deploying such pervasive-surveillance devices in cities. Chief among them is public opposition: civil-liberties groups have raised concerns about the transparency of computer-vision technologies, especially facial recognition.
