
Robot testifies; DALL-E 2 arrives in Microsoft 365, and other AI news
We aim to inform readers not only about events in the bitcoin industry, but also about developments in related technology sectors—cybersecurity and the world of artificial intelligence (AI).
ForkLog AI has gathered the most important AI news from the past week.
- Meta unveiled the VR headset Quest Pro.
- Microsoft announced a range of new features for its cloud service and Office apps.
- Google AI introduced a model for recreating music from a snippet.
- iPhone 14 owners complained that the crash-detection alert falsely triggered on amusement-park rides.
- Image generators were used to dynamically render VR worlds.
- In Japan, robo-cars were equipped with ‘eyes’.
- The most notable AI deals of the past week.
Meta unveils the Quest Pro VR headset
Meta unveiled the Quest Pro VR headset with eye-tracking and facial-expression tracking. The announcement took place at Connect 2022.

The headset is equipped with ten cameras facing both inward and outward. Some sensors track the user’s face and transfer their expressions to their digital avatar. Other sensors capture images of the surrounding world for visualization in the virtual space.
The device is powered by a Qualcomm Snapdragon XR2+ processor, 12 GB of RAM and 256 GB of built-in storage. The headset features two displays with a resolution of 1800×1920 pixels per eye and a refresh rate of 90 Hz.
The Quest Pro will go on sale on October 25, 2022, at a price of $1,500.
The company also entered into a partnership with Microsoft to bring Teams, Office, Windows and Xbox Cloud Gaming to virtual reality.
In addition, Meta chief Mark Zuckerberg spoke about developing new AR glasses in partnership with Ray-Ban.
New AI features announced for Azure and Microsoft 365
At the Ignite 2022 conference, Microsoft unveiled a series of improvements for the Azure cloud platform. Among them:
- a ready-made Form Recognizer AI model that automatically extracts key data from contracts;
- integration of the DALL-E 2 algorithm into Azure OpenAI for generating creative content;
- enhanced capabilities for understanding and summarizing information from documents, plus expanded language support, in Cognitive Service for Language;
- new speech-synthesis and transcription-from-conversation features for Speech Service;
- option to extract visual features from images to manage digital assets in Azure Computer Vision;
- automatic code generation from text using OpenAI Codex for Power Automate.
Also at the Surface event, Microsoft unveiled AI products for Microsoft 365, including the Syntex document-processing service and Designer app with DALL-E 2 text-to-image generation support.
Google AI unveils a model for reconstructing music from a snippet
The Google AI division unveiled AudioLM, capable of reconstructing a musical composition from a short excerpt.
The developers said the algorithm requires only a few seconds of audio to predict the continuation. The system can generate even complex sounds such as piano melodies and human speech that match the style of the original sample and are almost indistinguishable from it.
According to Google AI, AudioLM will speed up AI training for audio generation. In the future, the model can be used to create musical accompaniment for videos.
Opinion: more than $100 billion wasted on self-driving technologies
Self-driving-vehicles expert Anthony Levandowski stated that modern systems will not reach full autonomy in the near future. He said computers remain too “stupid,” and any ordinary obstacle could confound them.
Levandowski argues that the most modern autonomous systems can achieve is operating in closed, controlled environments, such as dump trucks in quarries.
According to the expert, his skepticism toward robo-cars emerged in 2018. However, he acknowledged Tesla’s achievements and allowed that the company might someday release a truly autonomous vehicle.
In the Netherlands, webcam surveillance of employees was deemed illegal
A Dutch court ruled that requiring employees to keep a webcam on for monitoring during working hours violates human rights.
In August 2022, the American software developer Chetu hired a Dutch telemarketer. The company required the employee to log into the system during the workday, display their computer screen, and keep the webcam on. When he objected to monitoring, he was fired for “refusal to work” and “insubordination.”
The court ruled that “the instructions to turn on the webcam do not respect employees’ privacy.”
Chetu was ordered to pay $48,660, covering legal costs, back wages, and compensation for unlawful dismissal. The company was also ordered to void the employee’s non-compete clause.
Media: Apple iPhone 14 identifies roller-coaster rides as crashes
A number of iPhone 14 users complained that their device misidentified a roller-coaster ride as a crash.
According to The Wall Street Journal, devices with Crash Detection identify roller-coaster rides as emergencies and automatically dial 911. Police have repeatedly responded to calls at amusement parks because the sensors interpreted a ride’s sharp turns and sudden braking as an accident.
Crash Detection is integrated into iPhone 14 smartphones and the Watch Series 8, Watch SE, and Watch Ultra. The feature uses a gyroscope, accelerometer, and an algorithm trained on numerous simulations of motor-vehicle accidents.
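Apple has not published how Crash Detection works internally, but the article’s false-positive problem is easy to see in a toy model. The sketch below is a hypothetical threshold heuristic on accelerometer readings, with all numbers invented for illustration; it is not Apple’s algorithm.

```python
import math

def detect_crash(samples, threshold_g=4.0, min_duration=2):
    """Naive crash heuristic: flag a crash when the acceleration
    magnitude stays above `threshold_g` for at least `min_duration`
    consecutive samples. A real system fuses far more signals
    (gyroscope, GPS speed, barometric pressure, microphone)."""
    run = 0
    for ax, ay, az in samples:
        g = math.sqrt(ax * ax + ay * ay + az * az)
        if g >= threshold_g:
            run += 1
            if run >= min_duration:
                return True
        else:
            run = 0
    return False

# A genuine crash: a sustained high-g spike.
crash = [(0, 0, 1.0)] * 5 + [(0, 6.0, 1.0)] * 3 + [(0, 0, 1.0)] * 5
# A roller-coaster drop produces comparable g-forces, so the same
# heuristic fires -- the kind of false positive the article describes.
coaster = [(0, 0, 1.0)] * 5 + [(0, 4.5, 1.0)] * 3 + [(0, 0, 1.0)] * 5
# Ordinary walking stays well below the threshold.
walking = [(0, 0.2, 1.0)] * 10
```

On motion data alone, a hard braking event on a coaster is nearly indistinguishable from an impact, which is why production systems combine multiple sensors and a model trained on simulated crashes rather than a fixed threshold.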
Google Meet adds automatic transcription for video meetings
Google Meet, the video-conferencing service, can now automatically transcribe video meetings.
The feature converts speech to text and adds a transcript to Docs. Automatic transcription is available for English-language meetings, with support for French, German, Spanish and Portuguese coming in 2023.
In the future, the company promised to extend Meet’s capabilities with automatically generated meeting summaries.
Meet also gained adaptive framing: AI-powered cameras from Huddly and Logitech keep participants centered in the frame.
AI used in rendering VR worlds
An enthusiast employed a neural network to create continuously evolving virtual worlds.
The developer used the Stable Diffusion algorithm and the visual programming tool TouchDesigner. He created several VR scenes whose objects were generated by the neural network from text prompts.
The project is in an early stage of development. However, according to the engineer, he has already achieved fully functional VR scenes that render in real time at 60 FPS.
Ai-Da testifies before the British Parliament
The robot artist Ai-Da testified before the House of Lords’ Communications and Digital Technology Committee as part of an inquiry into the future of the UK art industry.
At the outset of the hearings, the committee chair Tina Stowell noted the ‘seriousness’ of the proceedings and clarified that the device was presenting evidence but could not be considered a witness or hold a status equal to a human’s. She added that Ai-Da’s creator, Aidan Meller, bears full responsibility for the system’s statements.
The robot answered committee members’ questions with pre-scripted responses produced by a language model. During the hearing, the device made spelling mistakes, sometimes struggled to speak, and at one point shut down.
When asked about art creation, it referred to the use of camera eyes, AI algorithms and manipulators.
During the hearings, the robot also stated that AI could be a ‘threat and an opportunity’ for artists.
In Japan, robo-cars wear ‘eyes’
Researchers from the University of Tokyo and Kyoto University proposed equipping autonomous vehicles with ‘eyes’. They argue that clear visual cues signaling whether the vehicle has detected a pedestrian will help reduce the risk of collisions.
As part of the experiment, the researchers fitted a golf cart with two large movable ‘eyes’ and recruited 18 participants, men and women aged 18 to 49. Wearing VR headsets, participants were asked to decide whether to walk in front of the robo-car, with three seconds to assess the situation.
The results showed that participants were less likely to cross the road when the golf cart’s ‘eyes’ were turned away, and felt safer when the vehicle was looking at them.
The week’s most notable AI deals
From 9 to 15 October 2022, AI startups attracted over $89.6 million. Here are the most notable deals.
- Trendsi raised $25 million to improve its risk-reduction technology for retail supply chains.
- Biometric-authentication system provider IronVest raised $23 million.
- Foxglove raised $15 million to develop infrastructure for robotics.
- Levity raised $8.3 million to support a platform for automating workflows.
- Alvin raised $6 million to expand the team building a service for data-traceability and quality improvement.
- Continue AI raised $5.7 million.
- Yummy raised $3.7 million to support technology for meal planning and grocery shopping.
- SecondSight raised $3 million to bring its AI-driven cyber-insurance underwriting platform to market.
Also on ForkLog:
- Moscow will aggregate data from all Russian street-surveillance cameras.
- American soldiers complained of feeling unwell while using HoloLens headsets.
- In China, deploying an armed robot dog from a drone was demonstrated.
- An AI was taught to guess passwords from heat traces on keyboards.
- The Joe Rogan deepfake conducted an interview with a fake Steve Jobs.
- Scientists created a lightweight exoskeleton with artificial intelligence.
Subscribe to ForkLog AI on Telegram: ForkLog AI — all AI news!
ForkLog newsletters: keep your finger on the pulse of the bitcoin industry!