
OpenAI Urges U.S. to Rethink AI Export Policies Amid Concerns Over Chinese Lab

New updates have been reported about OpenAI (PC:OPAIQ)

OpenAI has submitted a policy proposal to the U.S. government, urging a reevaluation of AI export rules in light of concerns regarding Chinese AI lab DeepSeek. OpenAI describes DeepSeek as ‘state-subsidized’ and ‘state-controlled,’ and suggests that the U.S. consider banning models from DeepSeek and similar Chinese government-supported entities. The proposal, submitted as part of the Trump administration’s ‘AI Action Plan’ initiative, highlights potential security risks associated with DeepSeek’s models, such as its R1 ‘reasoning’ model, because Chinese law can compel companies to comply with government demands for user data. OpenAI warns that using ‘PRC-produced’ models in countries classified as ‘Tier 1’ under the Biden administration’s export rules could pose privacy and security threats, including intellectual property theft.

OpenAI’s stance represents an escalation in its campaign against DeepSeek, which it previously accused of improperly ‘distilling’ knowledge from OpenAI’s models. While there is no direct evidence that DeepSeek is controlled by the Chinese government, the PRC’s growing interest in the lab is evident, with DeepSeek’s founder recently meeting Chinese leader Xi Jinping. OpenAI spokesperson Liz Bourgeois clarified that the company does not advocate for outright restrictions on using models like DeepSeek’s. Instead, OpenAI proposes that U.S. export rules be adjusted to allow more countries access to U.S. computing resources, provided their data centers do not rely on PRC technology that poses security risks. This nuanced position aims to promote broader access to AI technology while safeguarding against potential security vulnerabilities.

Disclaimer & DisclosureReport an Issue

1