ForkLog

Nvidia unveils tool for building safe AI models


Nvidia unveiled the NeMo Guardrails tool, designed to help developers ensure accuracy, relevance and safety in generative AI.

NeMo Guardrails allows developers to apply three distinct types of constraints to large language models (LLMs). In particular, companies can set topical guardrails that prevent their applications from responding to irrelevant queries.

For example, a customer-support chatbot built with its software would refuse to answer a weather question.

Companies can also set safety guardrails, which steer LLMs toward accurate and appropriate responses, and security guardrails, which restrict the model to connecting only with external applications known to be safe.
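For illustration, a topical constraint like the weather example above can be expressed in Colang, the configuration language NeMo Guardrails uses to define rails. This is a minimal sketch; the specific user phrases and the bot's reply are hypothetical, not taken from Nvidia's materials.

```colang
# Hypothetical topical rail: a support chatbot declines weather questions.

define user ask about weather
  "What's the weather like today?"
  "Will it rain tomorrow?"

define bot refuse off topic
  "Sorry, I can only help with questions about our products and services."

define flow weather deflection
  user ask about weather
  bot refuse off topic
```

A developer loads a config directory containing definitions like this, and the runtime matches incoming messages against the example phrases to decide when the rail should fire.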

According to Nvidia, NeMo Guardrails works with all language models, including ChatGPT. The company also states that almost any software developer can use the tool.

“Nvidia has made NeMo Guardrails open source to contribute to the immense energy of the developer community and the safety of AI operation,” the blog says.

The company is confident that the tool will help developers bring their AI services in line with security requirements.

Nvidia also noted that NeMo Guardrails works with the tools enterprise developers already rely on.

The tool is integrated into the existing NeMo platform for building generative AI models. Business customers can also access it via the AI Enterprise or AI Foundations platforms.

In April, Nvidia showed an algorithm for generating realistic videos from text descriptions.

In September 2022, the company introduced new chips and AI services, among them the NeMo LLM platform for building language models.
