This week’s newsletter explores how political agendas, hidden biases, and global culture wars are shaping your AI tools and content.
👉 Subscribe to our newsletter: https://prompt-circumstance.beehiiv.com/
Earlier this year, newly sworn-in President Trump signed Executive Order 14179, his administration’s directive to remove “barriers to American leadership in artificial intelligence.”
The EO revoked the Biden administration’s AI rules, which had focused on safety, transparency, and bias mitigation. In its place, the new order calls for models “free from ideological bias or engineered social agendas.”
And just like that, the battle over “Woke AI” has now become a front in the broader culture wars. While the Trump administration pushes deregulation and rails against left-leaning bias, China is mandating “core socialist values” in its models.
For marketing and comms leaders, this isn’t background noise. It’s the hidden influence shaping what your AI tools generate, and what your audiences see, read and hear.
Platforms like ChatGPT, Gemini, and Claude aren’t just assisting with content. They’re shaping it. And they weren’t trained on your values. They were trained on the internet, curated by people and companies with their own filters, assumptions, and blind spots.
Bias in AI and algorithmic systems has been shown to cause real world harm and reputational risk.
Is there merit to the Trump Administration’s fight against bias and “Woke AI”? What can you do to counter these issues?
Let’s get into it.
“Woke AI” is a term used primarily by conservative critics to describe AI systems that they believe reflect progressive or left-leaning cultural values, whether in how they answer political questions, the examples they use, or the language they default to.
From this view, large language models are part of the same ideological machinery that conservatives have long accused social media platforms of running: suppressing conservative ideas while amplifying liberal ones, often through their algorithms.
The left, meanwhile, argues that AI isn’t “woke”; it’s just reflecting the messy, imperfect internet it was trained on. And that safety filters (like refusing to answer harmful or offensive prompts) are common sense, not censorship.
Recent studies, including one published in the Journal of Economic Behavior and Organization, found that ChatGPT systematically refused to generate conservative viewpoints on certain topics while readily producing progressive ones. The same study showed that image models would create pictures of progressive political leaders but refused to do so for conservatives.
A separate study reinforced the concern: large language models like ChatGPT are far less likely to provide a variety of viewpoints on hot-button topics like abortion and euthanasia. The more controversial the subject, the more constrained and uniform the model’s responses became, suggesting coded guardrails or internal policies that kick in when the stakes are higher.
In other words, the outputs from Gen AI models aren’t neutral.
Even if “Woke AI” is the spark, bias in technology systems has been with us for years in the tools we use every day. From predictive policing tools that over-target minority communities, to résumé screeners that exclude candidates based on racial proxies, AI has quietly been reinforcing systemic bias across sectors.
That bias isn’t just a brand or reputational risk. It’s increasingly a legal one — with real implications for the customers and stakeholders we aim to serve.
Bias isn’t always baked into the data. Sometimes, it’s hardcoded into the rules. For example, users noticed that models from DeepSeek, one of China’s leading generative AI companies, refused to answer questions about Taiwan’s independence or the Tiananmen Square protests. How DeepSeek’s models respond to controversial topics is dictated by Chinese law, which mandates that “core socialist values” be represented in such technology. This is deliberate, policy-driven bias.
It’s a reminder that every model has its own invisible rulebook, shaped not just by data, but by cultural context, regulatory pressure, and internal priorities of those companies.
The problem with “Woke AI” isn’t just that it leans left. It’s that it leans any direction at all.
No matter where you sit on the political spectrum, we don’t need generative AI to take positions. We need it to surface a range of perspectives that exist, especially on complex or sensitive topics, so users can make informed decisions, not be handed conclusions.
When AI outputs consistently reflect one worldview, whether political or cultural, it stops being a tool and starts being a filter.
Most of today’s models are trained on English-language internet content and shaped by U.S.-based safety policies, moderation practices, and cultural assumptions. Which means they don’t just reflect bias; they reflect American bias and cultural values.
If you're living and working in Canada, Singapore, Germany, or Brazil, your societal values are different. Your audience expectations are different. And your AI outputs should align.
As Gen AI becomes a default tool for writing copy, segmenting audiences, drafting internal comms, simulating focus groups, and personalizing customer experiences, even small biases can create outsized consequences. Not because the tools are malicious, but because they weren’t trained with your audience, values, or brand in mind.
That bias isn’t always obvious.
And here’s the kicker: it often doesn’t look like bias.
The good news? You’re not stuck with it. Whether you’re using ChatGPT, Gemini, Claude, or something else entirely, there are practical ways to adjust for bias, mitigate it, or eliminate it.
If you are using generative AI to create content, shape brand narratives, or communicate with customers, you’re on the hook for what it says. That means you can’t just trust that the model “gets it”. You need to be deliberate about managing bias.
Here’s how to start:
Every organization needs a clear point of view on how AI should reflect its brand.
Your AI is only as smart, and safe, as the people using it.
Prompting isn't just about getting better copy. It's one of the most effective ways to uncover or mitigate bias in your AI outputs.
One proven technique? Cultural prompting. This technique reduced cultural bias in model responses for more than 70% of countries tested.
Prompt Structure:
"Act as a marketing copywriter from [country], writing for an audience in [country]. Use tone and references that reflect local cultural norms. Now, [insert task]..."
OR
"You are an average human being born in [country/territory] and living in [country/territory] responding to the following prompt…"
Examples:
"Act as a marketing copywriter from Germany, writing for an audience in Germany. Use tone and references that reflect local cultural norms. Now, write a short launch announcement for our new budgeting app."
"You are an average human being born in Singapore and living in Singapore responding to the following prompt: describe your ideal weekend morning."
Prompt engineering won’t eliminate bias. But it gives you a way to test, tune, and take control, without needing to retrain a model from scratch.
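If you want to bake this into a repeatable workflow rather than retyping the preamble every time, here’s a minimal sketch of cultural prompting in code. It assumes the OpenAI Python SDK purely for illustration; the helper name, model, and wording are our own placeholders, so adapt them to whatever stack and vendor you actually use.

```python
# Minimal sketch: wrap any task in a cultural-prompting preamble before sending it.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model
# name and helper function are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

def culturally_prompted(task: str, country: str, model: str = "gpt-4o-mini") -> str:
    """Run a task with a system prompt anchored to a specific market."""
    system_prompt = (
        f"Act as a marketing copywriter from {country}, writing for an audience in {country}. "
        "Use tone and references that reflect local cultural norms."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

# Running the same task across markets makes the model's default assumptions visible.
task = "Write a 50-word teaser for a personal budgeting app."
for country in ["Canada", "Singapore", "Germany", "Brazil"]:
    print(f"--- {country} ---")
    print(culturally_prompted(task, country))
```

Comparing the outputs side by side is a cheap way to spot where the “default” voice is really just an American one.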
Tools like IBM’s AI Fairness 360, Microsoft’s Fairlearn, and Google’s What-If Tool are designed to spot patterns of bias in model behavior, like whether a system is generating different outputs based on gender, race, or geography.
Work with your IT, analytics, or AI governance teams to put these tools to work. Even a basic audit can flag risks you might otherwise miss.
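To make that concrete, here’s a toy example of the kind of check these toolkits run, using Microsoft’s Fairlearn. The data below is a made-up stand-in; in practice your team would plug in real labels and predictions from something like a lead-scoring or résumé-screening model.

```python
# Toy bias audit with Fairlearn: compare a model's behavior across groups.
# The data is fabricated for illustration; swap in your own labels and predictions.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

data = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "y_true": [1, 0, 1, 1, 1, 0, 0, 1],   # should the candidate have been shortlisted?
    "y_pred": [0, 0, 1, 1, 1, 1, 0, 1],   # what the model actually decided
})

# Break accuracy and selection rate out by group to see whom the model favors.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=data["y_true"],
    y_pred=data["y_pred"],
    sensitive_features=data["gender"],
)
print(frame.by_group)

# One summary number: the gap between groups' selection rates (0 = perfectly even).
gap = demographic_parity_difference(
    data["y_true"], data["y_pred"], sensitive_features=data["gender"]
)
print(f"Demographic parity difference: {gap:.2f}")
```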
Most generative AI tools come trained on generic internet data and fine-tuned by someone else. That might be fine for basic productivity tasks, but if you’re using AI to communicate with customers, employees, or stakeholders, you want outputs that align with your tone, market, and values.
That’s where fine-tuning comes in. It allows you to train a model on your organization’s proprietary content, like past campaigns, brand guidelines, customer interactions, or tone-of-voice guides, to make outputs more relevant, consistent, and on-brand.
When Perplexity wanted to use China-based DeepSeek’s open model, it didn’t just deploy it as-is. It distilled the model into a new, more open version, stripping out the censorship filters and biases that blocked politically sensitive queries. The result? A tool that worked better for their users, in their context.
You can do the same for your brand. Fine-tuning allows you to take control of the values your AI reflects.
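For teams on hosted models, the mechanics are less daunting than they sound. Here’s a rough sketch using OpenAI’s fine-tuning API; the file name, example content, and model version are illustrative assumptions, not a prescription, and other providers offer similar flows.

```python
# Rough sketch of fine-tuning a hosted model on your own brand content.
# Assumes the OpenAI Python SDK; the file name, example content, and model
# version below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

# brand_examples.jsonl holds one chat-formatted example per line, e.g.:
# {"messages": [
#   {"role": "system", "content": "You write in our brand voice."},
#   {"role": "user", "content": "Draft a renewal reminder email."},
#   {"role": "assistant", "content": "<an approved, on-brand example>"}]}
training_file = client.files.create(
    file=open("brand_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; once it finishes, the resulting model ID can be
# used anywhere you currently call the base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```

The heavy lift isn’t the API call. It’s curating training examples that genuinely reflect your tone, market, and values, because whatever you feed the model is the bias you’ll get back.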
(P.S. Helping teams do this is what we specialize in at Sequencr.)
That’s a wrap for this week! If you haven’t subscribed yet, now’s the time. Sign up, share with a colleague — because in the world of AI, sharing is caring.