DeepSeek Prompts OpenAI to Bolster Corporate Security

OpenAI has revamped its security system to safeguard intellectual property against corporate espionage amid concerns over theft by Chinese competitors, reports the Financial Times, citing sources.

In recent months, the company has implemented stricter measures to control confidential information and intensified employee screening. The initiative gained momentum following the release of a competing model by the Chinese AI startup DeepSeek.

OpenAI claims that the Chinese firm illicitly copied its technology using a method called "distillation", in which one neural network is trained on the responses of another LLM.
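For context, distillation in this sense usually means harvesting prompt-and-response pairs from a stronger "teacher" model and fine-tuning a smaller "student" model on them. The sketch below is illustrative only and does not reflect OpenAI's or DeepSeek's actual pipelines; `query_teacher` is a hypothetical stub standing in for an API call to the teacher model, and the student model and hyperparameters are placeholders.

```python
# Minimal sketch of response-based distillation (illustrative assumptions only).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

def query_teacher(prompt: str) -> str:
    # In practice this would call the teacher LLM's API; stubbed for the sketch.
    return "(teacher response for: " + prompt + ")"

prompts = [
    "Explain quicksort in one sentence.",
    "What is gradient descent?",
]

# 1. Build a synthetic training set from the teacher's answers.
records = [{"text": p + "\n" + query_teacher(p)} for p in prompts]
dataset = Dataset.from_list(records)

# 2. Fine-tune a smaller student model on those answers.
student_name = "gpt2"  # placeholder student model
tokenizer = AutoTokenizer.from_pretrained(student_name)
tokenizer.pad_token = tokenizer.eos_token
student = AutoModelForCausalLM.from_pretrained(student_name)

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=256)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the same tokens
    return out

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="distilled-student",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()
```

The point of the technique is that the student never needs the teacher's weights: its API outputs alone serve as training data, which is why providers treat large-scale harvesting of responses as a terms-of-service and IP issue.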

The FT noted that the incident compelled Sam Altman’s startup to act “much more stringently.” The company is “aggressively” expanding its team of specialists, including cybersecurity units.

The firm began implementing strict policies in its San Francisco offices last summer to limit employee access to critical information.

The company now stores much of its proprietary technology in isolated environments: the computers are disconnected from the internet and not linked to other networks. Additionally, OpenAI offices employ biometric checks, and employees can access certain areas only after a fingerprint scan.

OpenAI has also enhanced the physical security of its data centers. The company has joined many other Silicon Valley firms in tightening the screening of employees and job candidates amid the heightened threat of espionage from China.

Back in June 2024, retired US Army General Paul Nakasone joined OpenAI's board of directors and its Safety and Security Committee.
