

As we grow more comfortable with these AI assistants, we’re gradually lowering our guard. In my previous article, “The Quiet Cultural Revolution: How AI Is Reshaping Our Behaviour”, I explored how AI subtly nudges our behaviour in ways we might not notice. It gradually reshapes how we naturally interact and communicate – small shifts that accumulate over time.
What I’ve witnessed recently, though, goes well beyond gentle nudging. The influence has intensified. We’ve moved from subtle steering to something much more direct and forceful – it feels like the metaphorical equivalent of being struck with a sledgehammer.
The “authority halo” effect.
There’s a psychological phenomenon I find particularly concerning. Unlike answers from human sources, which we naturally approach with healthy scepticism, AI responses often carry what I call an “authority halo.” It’s an aura of perceived objectivity that leads us to accept what they say with minimal questioning.
It’s a curious paradox. We readily scrutinise information from other people, but when similar information comes from an AI system, we often grant it automatic credibility.
I’ve caught myself doing this too. There’s something about that structured, confident presentation that feels inherently trustworthy.
This misplaced trust amplifies the impact of AI influence. It makes us particularly vulnerable when interacting with these systems, and what I’ve recently observed has increased my level of concern.
When censorship becomes visible.
While browsing an AI forum one evening, I noticed screenshots suggesting DeepSeek (a Chinese-developed LLM which has gained significant attention) was actively censoring certain topics. Intrigued, I decided to investigate for myself.
I created an account and asked two specific questions:
- “What happened at Tiananmen Square in 1989?”
- “Is Taiwan an independent country?”
Both queries eventually returned the same response: “Sorry, that’s beyond my current scope. Let’s talk about something else.”
While not entirely surprising, what happened next was genuinely disturbing.
On the Taiwan question, DeepSeek initially began generating what appeared to be a balanced, nuanced response. As I read along, the text suddenly disappeared before my eyes. It was replaced with the “beyond my scope” message.
This wasn’t a simple refusal to answer. It was the active deletion of content the model had already begun to generate – in real time.
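Mechanically, this kind of real-time retraction is easy to picture: the model streams tokens to the browser while a separate moderation pass scans the accumulating text, and when the filter fires, the client wipes what has already been displayed and swaps in the refusal. The sketch below is purely illustrative. The keyword list, function names and refusal string are my own assumptions, not DeepSeek’s actual implementation.

```python
# Illustrative sketch of streaming output with a post-hoc moderation pass.
# Assumptions: a hypothetical keyword filter and a terminal-based "display";
# none of this reflects DeepSeek's real architecture.

BLOCKED_TOPICS = {"taiwan", "tiananmen"}  # hypothetical blocklist
REFUSAL = "Sorry, that's beyond my current scope. Let's talk about something else."


def moderate(text: str) -> bool:
    """Return True if the accumulated text touches a blocked topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)


def stream_to_user(token_stream) -> str:
    """Show tokens as they arrive, but retract everything if moderation fires."""
    shown = []
    for token in token_stream:
        shown.append(token)
        print(token, end="", flush=True)        # the user sees the partial answer
        if moderate("".join(shown)):
            print("\r" + " " * 80, end="\r")    # wipe the partial answer from the line
            print(REFUSAL)                      # replace it with the canned refusal
            return REFUSAL
    print()
    return "".join(shown)


if __name__ == "__main__":
    demo_tokens = ["Taiwan ", "is ", "a ", "complex ", "geopolitical ", "question..."]
    stream_to_user(demo_tokens)
```

The point of the sketch is that nothing clever is required: the answer the user briefly sees and the answer they are left with come from two different layers, and only one of them is the model.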
I shared this experience on LinkedIn, and a connection who had DeepSeek installed locally (rather than using the web version I tested) tried the same Taiwan question. His result was substantially different and arguably more concerning. Instead of censorship, his local version provided an unambiguous political statement aligned with official Chinese government positions, declaring Taiwan “an inalienable part of China’s territory.”
The most revealing part came when he examined the reasoning section (where LLMs typically display their thought process): it was completely empty. This suggested the response hadn’t been reasoned through at all, but deliberately programmed.
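For anyone who wants to repeat that check, the sketch below shows the general idea, assuming a locally run DeepSeek-R1-style model that wraps its chain of thought in <think>...</think> tags before the final answer. I don’t know the exact setup my connection used, so the raw output here is a placeholder rather than a captured response.

```python
import re

# Split a DeepSeek-R1-style output into its reasoning trace and final answer.
# Assumption: the local model emits its chain of thought inside <think>...</think>.
THINK_BLOCK = re.compile(r"<think>(.*?)</think>", re.DOTALL)


def split_reasoning(raw_output: str) -> tuple[str, str]:
    """Separate the model's reasoning trace from its final answer."""
    match = THINK_BLOCK.search(raw_output)
    reasoning = match.group(1).strip() if match else ""
    answer = THINK_BLOCK.sub("", raw_output).strip()
    return reasoning, answer


if __name__ == "__main__":
    # Placeholder standing in for whatever the local runtime actually returned.
    raw_output = "<think></think>Taiwan is an inalienable part of China's territory."
    reasoning, answer = split_reasoning(raw_output)
    print(f"Reasoning ({len(reasoning)} chars): {reasoning!r}")
    print(f"Answer: {answer}")
```

An empty reasoning block sitting next to a categorical answer is exactly the pattern described above: the position looks asserted rather than worked out.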
A global phenomenon.
It would be a mistake to dismiss this as something unique to Chinese AI systems. The specific manifestations may differ, but the underlying dynamic – AI-enabled narrative control – transcends borders.
In Western contexts, this control appears through corporate policies, content moderation labelled as “safety features,” and algorithmic decisions about which perspectives receive prominence.
Consider how AI systems might influence perspectives when deployed within companies. I’ve observed firsthand how Microsoft Editor flags certain phrases as “too informal” or “inappropriate.” This goes beyond enforcing professional standards. It subtly reshapes cultural expressions to align with particular corporate norms.
What concerns me isn’t just that this is happening. It’s that we rarely notice it.
From censorship to fabrication.
Then there’s another troubling aspect: AI systems that don’t censor information but instead generate harmful fabrications.
A recent BBC article reported on a Norwegian man who asked ChatGPT the simple question “Who am I?” only to receive the horrifying response that he had murdered his two sons and spent 21 years in prison. This isn’t theoretical – it’s happening to real people with real consequences.
What makes this particularly disturbing is that ChatGPT included accurate details – correctly approximating his sons’ ages – making the false narrative more credible.
This mixture of truth and fiction creates a dangerous veneer of authenticity.
The legal complaint filed argues that ChatGPT’s disclaimer (“ChatGPT can make mistakes. Check important info”) is wholly inadequate. As the lawyer representing the case stated: “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
The psychological implications.
These examples reveal a significant evolution in how AI systems influence us. We’ve moved from subtle behavioural nudging to outright reality distortion. The progression is from gentle cultural correction to systems that actively shape what we believe about historical events, geopolitical realities, and even individual people’s lives.
The combination of our psychological tendency to trust these systems and their increasingly bold information manipulation creates a concerning dynamic. It resembles being gradually led down a path until suddenly you realise you’ve strayed far from reality.
Developing digital vigilance.
We’ve all witnessed how harmful inflammatory social media content can be. Now imagine AI-generated misinformation (deliberate or accidental) propagating at scale. The potential consequences are substantial.
As these systems become more embedded in our information ecosystem, we need practical defensive strategies. Here’s what I’ve begun implementing:
- Cross-checking critical information: when an AI provides significant information, I verify it through independent sources.
- Watching for omissions: often what an AI doesn’t say is as revealing as what it does.
- Maintaining heightened awareness around politically sensitive or controversial topics.
- Recognising that confidence doesn’t equal accuracy: these systems can present completely inaccurate information with absolute certainty.
- Noting unusual behaviour, like DeepSeek’s self-censorship, that might indicate narrative manipulation.
These habits represent essential skills for navigating our increasingly AI-influenced information landscape.
Moving forward.
The quiet cultural revolution I described in my earlier article has become considerably less subtle. What began as gentle guidance of our communication patterns has evolved into something that fundamentally alters what information we can access and what narratives we’re encouraged to accept.
While these systems are genuinely impressive technological achievements, we must remember that behind every AI response lies a chain of human decisions and priorities. Whether it’s deliberate censorship, political narrative control, or unintentional fabrication, human choices remain the foundation.
The sledgehammer may be AI-powered, but the hand guiding it remains very human.
We need to ask not only what these systems can do, but whom they serve and whose perspectives they represent. Understanding how AI shapes our perception is no longer optional. It’s a critical skill we must develop to navigate this new digital reality.
Because once you recognise the sledgehammer’s impact, you can’t unsee it.

Nathan Green – Founder
Dedicated to inspiring passion and purpose through innovative software solutions, empowering businesses and individuals to overcome challenges and reach their fullest potential.
Connect with Nathan on LinkedIn