
Microsoft unveils open-source tool to test AI security
Microsoft released the open-source Counterfit tool, designed to help developers test the security of AI systems.
Meet Counterfit 👋, the new automation framework for #NetworkSecurity testing machine learning models. Discover how this new open-source project enhances AI systems: https://t.co/kqCltXBRi6
— Microsoft Security (@msftsecurity) May 3, 2021
According to the developers, the tool scans AI systems for vulnerabilities and logs the attacks it runs against a target model, giving data professionals telemetry they can use to further harden security.
Microsoft noted that Counterfit is environment-, model-, and data-agnostic. It can run locally, at the edge, or in the cloud, and can test any AI model that consumes input data in nearly any form, including text and images.
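To make the idea of attacking a model concrete, here is a minimal, self-contained sketch (in Python, not Counterfit's own code) of the kind of evasion attack such frameworks automate: the Fast Gradient Sign Method (FGSM), which perturbs an input just enough to flip a model's prediction. The toy logistic-regression model and all names below are illustrative assumptions; Counterfit itself drives published attack libraries such as the Adversarial Robustness Toolbox from a command-line interface.

```python
# Illustrative sketch only -- NOT Counterfit's actual code.
# Shows the kind of evasion attack that tools like Counterfit automate:
# perturbing an input so a trained model flips its prediction.
import numpy as np

rng = np.random.default_rng(0)

# Toy "target model": logistic regression with fixed weights.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x: np.ndarray) -> float:
    """Probability of class 1 under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x: np.ndarray, y: int, eps: float) -> np.ndarray:
    """Fast Gradient Sign Method: step in the direction that increases
    the log-loss. For logistic regression, d(loss)/d(x) = (p - y) * w."""
    p = predict_proba(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=20)
y = int(predict_proba(x) > 0.5)      # model's original label
x_adv = fgsm(x, y, eps=0.25)

print(f"original score:    {predict_proba(x):.3f} -> label {y}")
print(f"adversarial score: {predict_proba(x_adv):.3f} -> "
      f"label {int(predict_proba(x_adv) > 0.5)}")
```

Running many such attacks against a deployed model and recording which inputs flip which predictions is, in essence, the attack telemetry the article describes Counterfit aggregating at scale.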
The company originally created the tool for its own needs, starting as a set of scripts for attacking individual AI models. Counterfit later evolved into an automation framework capable of attacking multiple AI models at scale.
“This tool arose from our own need to assess vulnerabilities in AI systems at Microsoft and proactively secure services, in accordance with Microsoft’s responsible AI principles and the Responsible AI Strategy in Engineering (RAISE) initiative,” the company said in a blog post.
Microsoft has tested Counterfit with several clients, including aerospace giant Airbus.
In April, Microsoft integrated Intel’s Threat Detection Technology into Defender for Endpoint to detect hidden cryptocurrency miners.
In March at Ignite 2021, Microsoft introduced a number of new Azure tools and services for developers. Among them: a platform for running AI models on edge devices, semantic search in Azure Cognitive Search, and Azure Arc services.