OpenAI’s Changing Stance on AI Regulation Raises Privacy Concerns Amid Expansion

OpenAI, once a supporter of AI regulation, has recently opposed a proposed California law aimed at establishing safety standards for developers of large AI models. The shift comes as the company expands its data acquisition efforts into media partnerships, biometric technology, and health data, raising concerns about privacy and the ethical implications of centralizing so much personal information.

Last month, OpenAI took a surprising stand against a proposed California law designed to establish baseline safety standards for developers of large AI models. This marks a significant shift for the company, whose CEO, Sam Altman, had previously expressed support for AI regulation.

Since its rise to prominence in 2022 with the launch of ChatGPT, OpenAI has grown from a nonprofit research organization into a major commercial player in the AI space, now valued at up to US$150 billion. It continues to push the boundaries of AI development, recently unveiling a new "reasoning" model intended to tackle more complex tasks.

However, the company’s recent moves indicate a growing interest in acquiring a broader range of data. This extends beyond the usual text and images used to train AI, potentially encompassing more personal and sensitive data related to online behavior, interactions, and even health.

While there's no evidence that OpenAI plans to consolidate these data streams, the possibility raises serious concerns about privacy and the ethical consequences of centralizing such vast amounts of information.

Media Partnerships

Since 2023, OpenAI has secured multiple partnerships with major media organizations, including Time magazine, the Financial Times, Axel Springer, Le Monde, Prisa Media, and Condé Nast, which owns titles such as Vogue and The New Yorker. These collaborations grant OpenAI access to a wealth of content, and its products could potentially be used to analyze user behavior, such as reading habits, preferences, and engagement, across these publishers' platforms.

Should OpenAI gain access to this data, the company could develop highly detailed user profiles, providing deeper insights into how individuals consume content. This could pave the way for more advanced tracking and user profiling.

Video, Biometrics, and Health

OpenAI is also venturing into the biometric space. It has invested in the webcam startup Opal, with plans to integrate AI into its cameras. AI-powered webcams could capture video footage from which biometric data, such as facial expressions and inferred psychological states, could be derived.

In another notable move, OpenAI and Thrive Global recently launched Thrive AI Health, which aims to use AI to "hyper-personalize" health-related behavior change. While the initiative promises strong privacy and security measures, it remains unclear how these will be implemented. Previous collaborations between tech companies and healthcare providers, such as Microsoft's partnership with Providence Health and Google DeepMind's controversial use of NHS patient data, have raised serious privacy concerns.

Sam Altman’s Controversial Side Project

Altman is also involved in other data-driven ventures, including WorldCoin, a cryptocurrency project he cofounded. WorldCoin seeks to create a global financial network and ID system based on biometric identification, specifically iris scans. The project has already scanned the irises of over 6.5 million people in nearly 40 countries, but more than a dozen jurisdictions have either suspended operations or scrutinized its data practices.

Authorities in Bavaria, Germany, are currently evaluating whether WorldCoin’s data collection methods comply with European privacy regulations. A negative ruling could see the project barred from operating in Europe, raising further concerns about the handling of sensitive biometric data.

Why This Matters

The current generation of AI models, including OpenAI's GPT-4, has largely been trained on publicly available data. But as AI systems evolve, they will require more data, which is becoming increasingly difficult to obtain. OpenAI has stated its ambition to train AI to understand a wide range of subjects, industries, and cultures, which will demand extensive and varied datasets.

In this context, OpenAI’s growing interest in media partnerships, biometric data, and health information paints a concerning picture. By gaining access to vast and diverse data sources, OpenAI could accelerate the development of its AI models, but at the potential cost of user privacy.

The risks are multifaceted. Large-scale data collection is inherently vulnerable to breaches and misuse, as evidenced by major incidents like the MediSecure data breach, which exposed the personal and medical information of nearly half of Australia's population. Consolidating such vast amounts of data also raises fears of widespread surveillance and profiling. While there is no indication that OpenAI currently plans to engage in these practices, its privacy record has not been flawless, and tech companies in general have a questionable history when it comes to data handling.

The potential for OpenAI to control vast amounts of personal data could allow it to exert significant influence over both individual users and broader societal trends.

Is Safety Being Compromised?

OpenAI's recent opposition to the California AI safety bill only deepens these concerns. The move signals a potential shift in priorities as the company increasingly focuses on rapid AI deployment and commercialization. In November 2023, Altman was briefly ousted as CEO, reportedly due to internal disagreements over the company's strategic direction. His swift reinstatement and the subsequent board shake-up suggest that OpenAI's leadership now backs his aggressive push for AI growth, even if it means sidelining safety measures.

This recent stance against regulation may be more than just a policy disagreement—it could represent a broader trend toward prioritizing expansion and market dominance over the safety and privacy concerns that accompany large-scale AI development.

OpenAI did not respond to The Conversation's request for comment by the deadline.