The Power Politics of AI: Corporate Battles and National Rivalries

February 18, 2025

AI Generated Summary:

In this week's newsletter, Prompt & Circumstance, we break down the power struggles in AI – from Musk’s takeover bid for OpenAI to the growing AI nationalism reshaping policy and infrastructure.

  • 💰 Musk vs. OpenAI – The $97.4B battle for AI dominance – Musk’s bid to buy OpenAI could derail its $40B capital raise, delaying GPT-4.5 and GPT-5.
  • 🏛️ AI nationalism & the emerging AI Cold War – At the AI Action Summit, VP JD Vance pushes an “America First” AI agenda, targeting China and the EU’s regulatory stance.
  • 🌍 Meta’s $10B AI infrastructure play – A massive 50,000-km subsea cable promises faster data pipelines for AI workloads and cements India’s strategic role.
  • ⚖️ Regulatory chaos & AI governance shifts – Nations pivot from ethics to control, forcing businesses to adapt AI strategies to regional policies.

📩 Subscribe here: https://www.sequencr.ai/promptandcircumstance

Welcome to Sequencr AI’s newsletter – Prompt & Circumstance – your weekly deep dive into the evolving world of Generative AI and its impact on marketing and communications.

If you like this newsletter, forward it to a friend!

The Power Politics of AI: Corporate Battles and National Rivalries

This week, AI isn’t just a tech story – it’s a political one. The AI Action Summit in Paris laid bare deepening geopolitical divisions over AI governance, and a speech by Vice President JD Vance signaled a further rise in "Generative AI nationalism".

Meanwhile, Elon Musk’s $97.4 billion bid to take over OpenAI has escalated his long-standing feud with Sam Altman, putting OpenAI’s transition from a nonprofit to a for-profit powerhouse into question.

What does this mean for you? Let’s get into it.

Musk vs. OpenAI – The $97.4 Billion Power Struggle That Could Reshape AI Competition

In last week’s newsletter, we covered OpenAI’s plans to raise $40 billion at a $300 billion valuation, a move that would make it the world’s second most valuable startup – trailing only SpaceX. The funding is expected to fuel Stargate, a massive AI infrastructure project that the company unveiled alongside President Trump at the White House in January.

However, this ambitious raise hinges on a significant transformation: OpenAI’s shift from its original nonprofit status to a for-profit entity – an ongoing and complex saga.

Originally founded as a nonprofit in 2015 to ensure artificial intelligence benefited all of humanity safely and ethically, OpenAI restructured in 2019 because its nonprofit model couldn’t attract the level of capital needed to develop cutting-edge AI. To fix this, it created a for-profit subsidiary (OpenAI LP) wrapped inside its nonprofit parent (OpenAI Inc.) – trust me, it’s confusing for everyone. In doing so, OpenAI introduced what it called a "capped-profit" model: investors could earn returns, but profits were capped at 100 times their initial investment.

The problem – 100X is not enough

This cap, combined with OpenAI’s nonprofit oversight, is now a major barrier to raising money at the scale it needs. Investors looking to pour billions into AI want uncapped returns, long-term equity, and decision-making power – something OpenAI’s current structure doesn’t fully allow.
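To make the cap concrete, here is a minimal, purely illustrative sketch – the capped_return helper and the dollar figures are hypothetical, not OpenAI’s actual deal terms – of why a 100x ceiling looks small to investors writing multi-billion-dollar checks:

```python
# Illustrative only: how a simple "capped-profit" return works in principle.
# The helper and the dollar figures below are hypothetical examples,
# not OpenAI's actual investment terms.

def capped_return(investment: float, gross_return: float, cap_multiple: float = 100.0) -> float:
    """Investor payout under a profit cap of cap_multiple times the money invested."""
    return min(gross_return, investment * cap_multiple)

# A hypothetical $100M early check is capped at $10B of returns,
# no matter how valuable the company eventually becomes.
print(capped_return(100e6, 50e9))  # -> 10,000,000,000.0
```

For a backer hoping to ride a frontier AI company to a multi-trillion-dollar outcome, that ceiling – more than the absolute dollar figure – is the sticking point.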

Musk Throws a Spanner at Altman

Musk’s bid directly challenges OpenAI’s efforts to complete this transition. To make the shift, OpenAI's for-profit arm must purchase the nonprofit’s assets — a deal rumored to be valued at $40 billion.

Musk’s offer is more than double that valuation, forcing the nonprofit board to consider whether OpenAI’s assets are being undervalued and potentially delaying the $40 billion capital raise. The timing of Musk’s bid isn’t accidental – it is designed to disrupt the company and create uncertainty that could make investors hesitate.

On Friday, OpenAI’s board formally rejected Musk’s offer. But the battle isn’t over – legal challenges are likely, with some arguing that OpenAI’s nonprofit board failed its fiduciary duty by not fully considering Musk’s bid. This comes on top of Musk’s existing lawsuit against OpenAI, currently making its way through the courts, in which he claims the company has strayed from its original mission of developing AI for the benefit of humanity.

Big Plans on Hold?

Make no mistake – if OpenAI fails to transition to a for-profit model, Musk gains a major advantage. A delay in the capital infusion could slow OpenAI’s progress and delay key product launches, giving xAI and other competitors an opening to accelerate their efforts.

At the end of last week – potentially in a bid to shore up his leadership amid Musk’s takeover attempt – Sam Altman detailed OpenAI’s roadmap, announcing plans to launch GPT-4.5 and GPT-5 in the coming months.

The announcement generated significant interest and discussion in the AI community. While details remain vague, Altman’s post hints that GPT-5 will focus on consolidating OpenAI’s various models into a more unified system, improving platform efficiency rather than introducing a radically more powerful model.

The Fight for AI Control — And How Transparent Should It Be?

Musk’s bid — and Altman’s push to transition OpenAI into a full for-profit model — raises a fundamental question for marketing and communications leaders.

As AI becomes deeply embedded in content creation, media relations, and crisis communications, who controls AI models, how they are trained and maintained, and what data they use will shape trust in brands and organizations. With AI adoption and knowledge growing, people are demanding more transparency into AI-driven outputs and decisions. Companies are also looking to exert greater control over what models produce by fine-tuning them to their needs and removing or addressing ethical issues and bias. This growing demand for openness and accountability is fueling a surge in the popularity of open-source AI.

AI Nationalism and The Emerging AI Cold War

The geopolitics of AI were on full display at last week’s AI Action Summit in Paris, where tensions between the U.S., Europe, and China came to the fore.

In his inaugural international policy speech, Vice President JD Vance outlined an assertive “America First” AI strategy, reinforcing the Trump administration’s commitment to maintaining U.S. leadership in the technology. While calling for global collaboration, Vance drew clear ideological lines, framing the future of AI as a defining choice for nations and businesses. He contrasted the U.S. model – rooted in innovation, individual freedom, and democratic values – with that of authoritarian regimes, particularly China, where AI is a tool for censorship, surveillance, and control.

Vance issued a strong warning against excessive regulation, a shot at the EU’s approach to AI governance. He warned against heavy-handed policies that would stifle American AI development and companies, a veiled threat amid ongoing tariff discussions. At the same time, Vance reinforced the administration’s pro-worker stance on AI, asserting that the technology should be developed not to replace jobs, but to enhance productivity and create new opportunities for American workers.

AI Regulation Takes a Backseat as Nations Race for Leadership

The speech marked a sharp departure from the cautious, risk-focused approach that governments have taken toward AI since generative AI’s mainstream emergence in 2022. At the AI Summits in 2023 and 2024, global leaders pledged to prioritize AI safety and risk mitigation. Over the past year, as AI competition has intensified – particularly with Chinese companies making rapid advancements in model development – this cautious stance has given way to a more aggressive pursuit of AI leadership.

Even the EU, once a champion of strict AI regulation, has softened its stance in recent months. Despite the early success of companies like Mistral, European AI firms have struggled to keep pace with their American and Chinese counterparts. Recognizing the urgency, the EU announced billions in new economic support for the sector last week, signaling a strategic pivot toward bolstering its AI industry.

AI Nationalism Isn’t Just About Models — It’s Also About Data Control and Access

As AI competition intensifies, it’s not just about who builds the best models — it’s about who controls the infrastructure that powers them. In recent months, tech companies have been investing heavily in power and network infrastructure. Last week, Meta announced it was building a $10 billion private subsea fiber-optic cable spanning 50,000 kilometers.

Meta’s ambitious project will connect the U.S. East Coast to India via South Africa, then loop back to the U.S. West Coast through Australia. By owning its own subsea cable, Meta secures faster, more reliable connections for platforms like Facebook, Instagram, and WhatsApp. More critically, AI-driven products demand massive amounts of real-time data processing – whether it’s AI-generated content, live translations, or personalized ad targeting. A dedicated pipeline helps ensure AI models receive data without slowdowns, minimizing latency for Meta’s AI-powered experiences.
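For a rough sense of why raw distance matters here, a back-of-envelope sketch follows – the only figure taken from the announcement is the ~50,000 km cable length; the fiber propagation speed of roughly 200,000 km/s is a standard approximation:

```python
# Back-of-envelope propagation delay over long-haul fiber.
# Light travels at roughly 200,000 km/s in optical fiber (the speed of
# light divided by the glass's refractive index of ~1.5), so distance
# alone sets a hard floor on latency before routing or congestion add more.

FIBER_SPEED_KM_PER_S = 200_000  # widely used approximation

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay over a fiber span, in milliseconds."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1_000

# The full ~50,000 km loop, end to end (any single segment is far shorter).
print(one_way_latency_ms(50_000))  # ~250 ms one way
```

Owning the route doesn’t change the physics, but it gives Meta control over routing and capacity – the parts of latency a company can actually manage.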

The subsea cable reinforces India’s growing strategic importance to Meta. Beyond its massive user base — Meta’s largest market for Facebook, Instagram, and WhatsApp — India is rapidly emerging as a key hub for AI infrastructure and talent. By securing a dedicated high-speed data pipeline to the country, Meta strengthens its long-term foothold in one of the world’s fastest-growing digital markets.

Policy and Regulatory Complexity Increases

The Trump administration’s aggressive push for AI leadership, the emergence of an AI Cold War, and the rapid scaling of AI adoption are reshaping the regulatory landscape for marketing and communications teams. AI governance is no longer just about content moderation and IP concerns — it will require navigating AI-driven censorship laws, geopolitical risks, and compliance frameworks that vary across regions.

Decisions about which AI models to use for content creation, digital experiences, and work product will increasingly depend on how different markets align with U.S., Chinese, or EU AI ecosystems. This will lead to greater fragmentation, forcing businesses to adapt their AI strategies to regional policies and technological restrictions.

For public affairs leaders, AI is now a policy battleground. Companies must actively engage with regulators, policymakers, and industry groups to shape AI governance in a way that safeguards business interests.

That’s all for this week, folks!

Subscribe and stay tuned for next week's closer look at all the recent AI model news and product releases!

Subscribe to our newsletter for exclusive Generative AI insights, strategies, tools, and case studies that will transform your marketing and communications.
