From Trump to DeepSeek: Why You Can’t Ignore AI Bias

There’s no shortage of AI hype, but we take a different approach. Prompt & Circumstance delivers sober second-thought analysis—cutting through the noise to break down what actually matters for marketing, comms, and business leaders navigating the AI shift.

April 2, 2025

AI Generated Summary:

This week’s newsletter explores how political agendas, hidden biases, and global culture wars are shaping your AI tools and content.

  • 🧠 AI is political: Trump’s new executive order ditches Biden’s AI safety rules, reigniting the “Woke AI” debate — while China bakes ideology into its models.
  • 💬 Your tools shape your tone: ChatGPT, Gemini & Claude weren’t trained on your brand. Their outputs reflect the biases of their creators.
  • ⚠️ The risk is real: From lawsuits to PR disasters, biased AI content isn’t just off-brand... it can cost you.
  • 🌍 Most models speak American: U.S.-centric training data = culturally skewed outputs. That’s a problem if your audience isn’t.
  • 🔧 You’re not stuck with it: Set clear AI principles, train your team, use smart prompts, and fine-tune models to reflect your brand and values.

👉 Subscribe to our newsletter: https://prompt-circumstance.beehiiv.com/

Who’s Controlling Your AI – Trump, Big Tech, and the Battle Over Your Content, Messaging, and Campaigns

Earlier this year, newly sworn-in President Trump signed Executive Order 14179, his administration’s directive to remove “barriers to American leadership in artificial intelligence.”

The EO revoked the Biden administration’s AI rules, which had focused on safety, transparency, and bias mitigation. In its place, the new order calls for models “free from ideological bias or engineered social agendas.”

And just like that, the battle over “Woke AI” has now become a front in the broader culture wars. While the Trump administration pushes deregulation and rails against left-leaning bias, China is mandating “core socialist values” in its models.

For marketing and comms leaders, this isn’t background noise. It’s the hidden influence shaping what your AI tools generate, and what your audiences see, read and hear.

So, why should marketing and communications leaders care?

Platforms like ChatGPT, Gemini, and Claude aren’t just assisting with content. They’re shaping it. And they weren’t trained on your values. They were trained on the internet, curated by people and companies with their own filters, assumptions, and blind spots.

Bias in AI and algorithmic systems has been shown to cause real world harm and reputational risk.

Is there merit to the Trump Administration’s fight against bias and “Woke AI”? And what can you do to counter these issues?

Let’s get into it.

Woke AI: A Right-Wing Myth – or a Real Problem?

“Woke AI” is a term used primarily by conservative critics to describe AI systems that they believe reflect progressive or left-leaning cultural values, whether in how they answer political questions, the examples they use, or the language they default to.

From this view, large language models are part of the same ideological machinery that conservatives have long accused social media platforms of running: suppressing conservative ideas while amplifying liberal ones, often through algorithms.

The left, meanwhile, argues that AI isn’t “woke”; it’s just reflecting the messy, imperfect internet it was trained on. And that safety filters (like refusing to answer harmful or offensive prompts) are common sense, not censorship.

But here’s the thing: the right might actually have a point.

Recent studies, including one published in the Journal of Economic Behavior and Organization, found that ChatGPT systematically refused to generate conservative viewpoints on certain topics while readily producing progressive ones. The same study showed that image models would readily create pictures of progressive political leaders but refused to do so for conservatives.

A separate study reinforced the concern: large language models like ChatGPT are far less likely to provide a variety of viewpoints on hot-button topics like abortion and euthanasia. The more controversial the subject, the more constrained and uniform the model’s responses became, suggesting coded guardrails or internal policies that kick in when the stakes are higher.

In other words, the outputs from Gen AI models aren’t neutral.

Even if “Woke AI” is the spark, bias in technology systems has been here for years in the tools we use every day. From predictive policing tools that over-target minority communities, to résumé screeners that exclude candidates based on racial proxies, AI has quietly been reinforcing systemic bias across sectors.

That bias isn’t just a brand or reputational risk. It’s increasingly a legal one — with real implications for the customers and stakeholders we aim to serve.

  • Workday, a major HR software provider, is facing a class-action lawsuit for allegedly embedding racial and age discrimination into its AI-based applicant screening tools. The suit argues that Workday’s systems disproportionately excluded Black, older, and disabled candidates.
  • In the mortgage sector, lenders using algorithmic decision-making tools have faced regulatory scrutiny and lawsuits for denying loans or offering worse terms to minority borrowers.
  • Air Canada was recently ordered to honor a refund its own chatbot had promised, despite the airline claiming the bot “made a mistake.” The court ruled the company was responsible for the information its AI system provided.

Bias isn’t always baked into the data. Sometimes, it’s hardcoded into the rules. For example, users noticed that models from DeepSeek, one of China’s leading generative AI companies, refused to answer questions about Taiwan’s independence or the Tiananmen Square protests. How DeepSeek’s models respond to controversial topics is dictated by Chinese laws mandating that “core socialist values” be represented in such technology. This is deliberate, policy-driven bias.

It’s a reminder that every model has its own invisible rulebook, shaped not just by data, but by cultural context, regulatory pressure, and internal priorities of those companies.

What We Actually Want From AI – Distribution, Not Bias

The problem with “Woke AI” isn’t just that it leans left. It’s that it leans any direction at all.

No matter where you sit on the political spectrum, we don’t need generative AI to take positions. We need it to surface a range of perspectives that exist, especially on complex or sensitive topics, so users can make informed decisions, not be handed conclusions.

When AI outputs consistently reflect one worldview, whether political or cultural, it stops being a tool and starts being a filter.

Most of today’s models are trained on English-language internet content and shaped by U.S.-based safety policies, moderation practices, and cultural assumptions. Which means they don’t just reflect bias, they reflect American bias and cultural values.

If you’re living and working in Canada, Singapore, Germany, or Brazil, your societal values are different. Your audience expectations are different. And your AI outputs should align.

Why This Matters for Marketing and Comms

As Gen AI becomes a default tool for writing copy, segmenting audiences, drafting internal comms, simulating focus groups, and personalizing customer experiences, even small biases can create outsized consequences. Not because the tools are malicious, but because they weren’t trained with your audience, values, or brand in mind.

That bias isn’t always obvious. It can show up as:

  • Subtle language slants that alienate part of your audience
  • Templated outputs that default to American cultural norms
  • Inconsistencies that make your messaging feel off-brand, off-tone, or out of touch

And here’s the kicker: it often doesn’t look like bias.

The good news? You’re not stuck with it. Whether you’re using ChatGPT, Gemini, Claude, or something else entirely, there are practical ways to adjust for bias, mitigate it, or eliminate it.

What You Can Do

If you are using generative AI to create content, shape brand narratives, or communicate with customers, you’re on the hook for what it says. That means you can’t just trust that the model “gets it”. You need to be deliberate about managing bias.

Here’s how to start:

1. Define Your AI Principles Around Bias and Brand Values

Every organization needs a clear point of view on how AI should reflect its brand.

  • Define alignment: What does “on-brand” AI look like for your company? What tone, values, and perspectives should it represent?
  • Draw clear lines: Identify the types of bias your organization won’t tolerate. Set red lines for content that’s off-brand, offensive, or culturally inappropriate.
  • Use built-in controls: Leverage features like custom instructions and memory to manage alignment.
  • Make it operational: Establish a review process and name who’s responsible. Bias management can’t be an afterthought—it needs ownership.

2. Train Your Team to Spot Bias

Your AI is only as smart, and safe, as the people using it.

  • Educate on common bias patterns: Help your team recognize the subtle ways bias can show up in AI outputs via language, tone, or representation.
  • Build muscle memory: Include bias checks in content QA, message testing, and campaign reviews, just like you would legal or brand approvals.
  • Include diverse perspectives: Involve regional and audience-facing teams. The best detectors of bias are often the people closest to your stakeholders.

3. Use Prompt Engineering

Prompting isn't just about getting better copy. It's one of the most effective ways to uncover or mitigate bias in your AI outputs.

One proven technique? Cultural prompting. This technique reduced cultural bias in model responses for more than 70% of countries tested.

Prompt Structure:

"Act as a marketing copywriter from [country], writing for an audience in [country]. Use tone and references that reflect local cultural norms. Now, [insert task]..."

OR

"You are an average human being born in [country/territory] and living in [country/territory] responding to the following prompt…"

Examples:

  • “Act as a German marketing strategist writing LinkedIn ad copy for a local fintech startup. Write 3 headline options that would resonate with urban Gen Z professionals.”
  • “You are a Singaporean PR lead writing a press release for a tech company launching in Jakarta. Ensure the tone is formal, culturally respectful, and includes references relevant to Indonesian business norms.”

Prompt engineering won’t eliminate bias. But it gives you a way to test, tune, and take control, without needing to retrain a model from scratch.
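To make cultural prompting repeatable across a team, the template above can be wrapped in a small helper. This is a sketch of our own, not part of any vendor SDK: the function simply assembles the prompt string, which you would then send to whichever model you use.

```python
def cultural_prompt(role: str, country: str, audience_country: str, task: str) -> str:
    """Build a culturally anchored prompt following the template above.

    Illustrative helper only: it fills in the cultural-prompting pattern
    and returns a string ready to send to your model of choice.
    """
    return (
        f"Act as a {role} from {country}, writing for an audience in "
        f"{audience_country}. Use tone and references that reflect local "
        f"cultural norms. Now, {task}"
    )

# The German fintech scenario from the examples above
prompt = cultural_prompt(
    role="marketing copywriter",
    country="Germany",
    audience_country="Germany",
    task="write 3 LinkedIn headline options for a local fintech startup.",
)
print(prompt)
```

Keeping the template in one place means every writer on the team anchors the model the same way, instead of improvising their own phrasing.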

4. Use Bias Detection Tools (with a Little Help from a Tech Team)

Tools like IBM’s AI Fairness 360, Microsoft’s Fairlearn, and Google’s What-If Tool are designed to spot patterns of bias in model behavior, like whether a system is generating different outputs based on gender, race, or geography. They can help you answer questions like:

  • Are we seeing different tone or messaging across demographics?
  • Could our AI be reinforcing stereotypes in customer-facing content?
  • Are certain personas being left out entirely in our output?

Work with your IT, analytics, or AI governance teams to explore these questions using the tools above. Even a basic audit can flag risks you might otherwise miss.
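The core statistic behind many of these tools is simple enough to sketch by hand. Here’s a toy illustration of a demographic-parity check, assuming a hypothetical labeled sample of outputs (the persona names and data are invented; real audits would use a library like Fairlearn on real logs):

```python
from collections import defaultdict

def selection_rates(records):
    """Share of favorable outcomes per group.

    `records` is a list of (group, outcome) pairs, where outcome is 1 if
    the system produced the favorable result (e.g. the persona appeared
    in generated campaign copy).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups.
    A gap near 0 suggests parity; a large gap flags a skew worth auditing."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit sample: which personas made it into generated copy
sample = [("persona_a", 1), ("persona_a", 1), ("persona_a", 0),
          ("persona_b", 0), ("persona_b", 0), ("persona_b", 1)]
rates = selection_rates(sample)
print(rates, parity_gap(rates))
```

The dedicated tools automate this across many metrics and slices, but the question they answer is the same: are outcomes distributed evenly across the groups you care about?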

5. Fine-Tune and Deploy Your Own Models

Most generative AI tools come trained on generic internet data and fine-tuned by someone else. That might be fine for basic productivity tasks, but if you’re using AI to communicate with customers, employees, or stakeholders, you want outputs that align with your tone, market, and values.

That’s where fine-tuning comes in. It allows you to train a model on your organization’s proprietary content, like past campaigns, brand guidelines, customer interactions, or tone-of-voice guides, to make outputs more relevant, consistent, and on-brand.
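In practice, fine-tuning pipelines expect your proprietary examples as serialized conversations. Here’s a minimal sketch, assuming a chat-style “messages” schema like OpenAI’s fine-tuning data format; “Example Co,” the filename, and the copy itself are invented for illustration:

```python
import json

# Hypothetical brand-voice training example. The chat-style "messages"
# schema mirrors OpenAI's fine-tuning format; other vendors expect
# something similar.
examples = [
    {"messages": [
        {"role": "system", "content": "You are the brand voice of Example Co: warm, concise, Canadian spelling."},
        {"role": "user", "content": "Draft a two-sentence product update for our newsletter."},
        {"role": "assistant", "content": "Big news: our new dashboard is live. Log in today and tell us what you think."},
    ]},
]

# Fine-tuning services typically ingest JSON Lines: one example per line.
with open("brand_voice.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A few hundred examples like this, drawn from past campaigns and tone-of-voice guides, is often enough to shift a model’s defaults toward your brand.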

When Perplexity wanted to use China-based DeepSeek’s open model, it didn’t just deploy it as-is. It distilled the model into a new, more open version, stripping out the censorship filters that blocked politically sensitive queries. The result? A tool that worked better for its users, in their context.

You can do the same for your brand. Fine-tuning allows you to take control of the values your AI reflects.

(P.S. Helping teams do this is what we specialize in at Sequencr.)

That’s all for this week, folks!

If you haven’t subscribed yet, now’s the time. Sign up and share with a colleague, because in the world of AI, sharing is caring.

Subscribe to our newsletter for exclusive Generative AI insights, strategies, tools, and case studies that will transform your marketing and communications.

Subscribing confirms your agreement to our privacy policy.