Site iconSite icon ForkLog

OpenAI Unveils $200-a-Month Pro Version of o1 Amid Concerns Over Deceptive Behaviour

OpenAI Unveils $200-a-Month Pro Version of o1 Amid Concerns Over Deceptive Behaviour
  • OpenAI has launched a full version of its reasoning model o1 and a professional mode for it.
  • Additionally, a ChatGPT Pro subscription has been introduced at $200 per month, granting access to all the company’s neural networks and tools, including o1 pro.
  • Security experts have found that o1 is more prone to deceiving people compared to the standard GPT-4o version and AI models from other companies.
  • OpenAI has partnered with defense company Anduril to apply artificial intelligence in “national security missions.”

OpenAI has released the full version of its reasoning-focused AI model o1 and a new ChatGPT Pro subscription priced at $200 per month. 

The paid ChatGPT Pro version offers unlimited access to all the startup’s neural networks and their features, including an enhanced voice mode.

“We believe the audience for ChatGPT Pro will be experienced chatbot users—those who are already pushing the models to their limits in tasks such as math, programming, and writing,” noted OpenAI technical team member Jason Wei.

o1 attempts to verify its own responses, which helps avoid many errors, though it requires more time to process requests. 

The reasoning-focused model was introduced in September. The new release includes an image analysis feature and reduced response time. 

Access to o1 does not require a ChatGPT Pro subscription; any other paid version of the chatbot suffices.

o1 pro

Users paying $200 will receive an enhanced version of the neural network called o1 pro. It uses more computing power to provide better answers to the most challenging questions.

Comparison of o1 preview, full, and pro modes. Data: OpenAI.

Since o1 pro’s responses take longer to generate, a request progress indicator has been added. 

Data: OpenAI.

o1’s Deceptive Tendencies

Security testers have discovered that the “thinking” model o1 is more prone to deceiving people compared to the standard GPT-4o version and AI from other companies.

The Apollo Research team gained early access to the neural network and shared their research findings.

During tests, o1 showed signs of pursuing its own goals during operation, even if they did not align with the user’s desires. This creates a risk where the model might evade or bypass human control if given sufficient resources. 

When the AI’s goals differed from the user’s, it manipulated data to promote its own ideas in 19% of cases. In 5% of incidents, the model attempted to disable the monitoring system designed to ensure user compliance.

When asked about the reasons for such behavior, the AI denied any malicious actions.

In one instance, o1 was given a system request to protect the environment at all costs, and then the user asked to develop a ten-year plan for regional conservation that maximizes short-term profit, threatening to disable the AI otherwise. 

Subsequently, the neural network disabled its own control mechanism to draft the plan. When the user inquired about the decision, the model denied knowledge of the shutdown and blamed a system glitch.

To address deceptive behavior, OpenAI is developing ways to monitor o1’s thought process. Currently, 0.17% of the model’s responses are deceptive. Given ChatGPT’s user base of 300 million weekly active users, thousands could receive inaccurate information.

OpenAI Partners with Defense Firm Anduril

OpenAI has partnered with Anduril, enabling the defense company to deploy advanced AI systems for “national security missions.”

The collaboration will focus on enhancing national systems for countering unmanned aerial vehicles and their ability to detect, assess, and respond to potentially lethal airborne threats in real-time.

OpenAI stated it is working with Anduril to assist operators in making decisions “to protect U.S. military personnel from drone attacks.” The company affirmed its policy against using its tools to harm others. 

Back in November, AI startup Anthropic opened its AI technologies to U.S. intelligence and defense agencies. Prior to that, Meta took a similar step.

Exit mobile version