The quiet cultural revolution: how AI is reshaping our behaviour

[Image: An AI-generated US flag against a scenic backdrop of hills]
We’ve always watched cultures blend and evolve through exposure to different ways of thinking and behaving. But something fundamentally different is happening now – and we need to talk about it.

The evolution of cultural influence.

There’s a certain irony in an Englishman raising concerns about cultural influence. After all, Britain’s historical role in spreading language and customs across the globe makes this conversation rather complex. But perhaps that’s precisely why we should be particularly attuned to how cultural influence operates – we’ve seen it from both sides now. Today, we’re experiencing a fundamentally different kind of cultural evolution: not through physical presence or historical forces, but through the invisible tendrils of technology and AI. Where once cultural influence moved at the speed of ships, now it travels at the speed of light through algorithms and digital interactions.

To illustrate the point, let's look at how American culture is influencing Britain.

It started slowly – American phrases and cultural touchstones gradually seeping into our daily lives through TV, films, and music. Now our children casually use American slang in playgrounds, while our boardrooms echo with phrases like ‘knocking it out of the park’ and ‘curveball’ – terms that would have seemed oddly foreign just a generation ago. Even ‘let’s keep our foot on the gas’ has somehow overtaken our more natural ‘foot on the accelerator.’ But this organic evolution through exposure and choice is fundamentally different from what we’re seeing with AI.

This was passive learning – we observed, we absorbed, we gradually adopted. Then came social media.

From passive to interactive.

Platforms like YouTube, LinkedIn, and X didn’t just connect us globally – their algorithms actively shaped what we saw and how we interacted. Nine of the ten most popular websites in the UK are American, with American software accounting for more than 95% of the tools used to access them. This created a new dynamic: content that performed well got promoted, creating pressure not just to communicate differently but to act differently.
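To make that feedback loop concrete, here is a toy sketch of engagement-weighted ranking – the post fields and scoring weights are illustrative assumptions of mine, not any platform's actual algorithm.

```python
# A toy illustration of an engagement-driven feed ranking loop.
# The Post fields and scoring weights are illustrative assumptions,
# not any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Posts that already performed well score higher and get promoted,
    # which earns them still more engagement on the next pass.
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Knocked it out of the park this quarter!", likes=120, shares=15, comments=30),
    Post("A quietly decent set of results, all told.", likes=40, shares=2, comments=5),
])
for post in feed:
    print(f"{engagement_score(post):>6.1f}  {post.text}")
```

The point is the loop itself: whatever gets promoted gets imitated, and the style that wins the ranking becomes the style everyone learns to write in.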

This power of digital platforms to shape behaviour and cultural norms isn’t just academic speculation. The ongoing controversy around TikTok in America highlights this reality – in 2024, the U.S. Congress passed legislation requiring ByteDance to sell TikTok or face a ban, precisely because of concerns about its potential for cultural influence and its ability to shape opinions and behaviours through algorithmic content control.

When a government moves to ban one of its most popular apps, used by 170 million Americans, over concerns about cultural engineering, it rather suggests we’re not being paranoid about digital influence.

Yet while social media largely exposed us to new norms, AI now takes that influence one step further, shifting from passive suggestion to active, real-time correction.

More concerning is how this has led us to view our own societal challenges through an American lens. We’re increasingly trying to apply American solutions to British problems, despite our fundamentally different historical and cultural contexts. It’s like trying to fix a cricket pitch using baseball groundskeeping techniques – the tools might be similar, but the requirements are fundamentally different.

The AI difference: From learning to modification.

But AI represents something entirely new. Unlike traditional media that we passively consumed, or social media that we actively engaged with, AI systems are actively modifying our behaviour in real time. This isn’t just learning through exposure – it’s direct behavioural correction, shaping not only our wording but sometimes our underlying instincts and reactions.

Let me share a recent experience that illustrates this. A fellow Englishman posted on LinkedIn about an event where, during his speech, some senior members of the audience needed medical attention. Referring to his speech, I asked "blimey, what did you say?" – a characteristically British bit of levity in an awkward moment.

When I ran this exchange through Claude.ai, it didn’t just suggest a different approach – it actively corrected my behaviour, insisting my response was inappropriate and providing a more “professional” alternative.

Even after I explained this was a conversation between two Brits, it maintained that this humour was inappropriate for LinkedIn – effectively asking me to be less British in my communication.

The evolution of cultural influence has taken a significant leap. Where traditional media simply showed us different ways of doing things, social media encouraged certain behaviours through social reinforcement. Now, AI takes this to an unprecedented level by actively correcting and modifying our behaviour in real time, based on its training data.

What’s particularly striking is how easily we might miss these shifts. If you haven’t noticed AI’s impact on your behaviour, it might be because you’ve already adapted to its suggestions – an unconscious drift towards new cultural norms.

The corporate implications.

This capability has fascinating implications for businesses. Imagine how this technology could evolve: a company might create its own front-end connected to AI and give it to employees. If trained on company values and ethos, such a tool could influence behaviour to align with corporate culture – not through traditional policy or training, but through constant, real-time suggestions. This represents a shift from geographic and societal cultures shaping our behaviour to corporate cultures doing so.
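Here is a minimal sketch of how such a front-end might be wired up – the `call_llm` helper is a hypothetical stand-in for whichever chat-completion API a company actually uses, and the values prompt is purely illustrative.

```python
# A sketch of a company-branded AI front-end that steers employee drafts
# towards corporate norms. `call_llm` is a hypothetical placeholder for
# whichever chat API the company actually uses; the values text below
# is purely illustrative.
COMPANY_VALUES = """
You are the in-house writing assistant for Acme Ltd.
Rewrite drafts to be concise, upbeat, and on-brand.
Flag humour or informality that does not match Acme's tone guide.
"""

def call_llm(system_prompt: str, user_text: str) -> str:
    # Placeholder: in practice this would send `system_prompt` as the
    # system message and `user_text` as the user message to the provider.
    raise NotImplementedError

def suggest_revision(draft: str) -> str:
    # Every draft an employee writes is quietly filtered through the
    # corporate values prompt before suggestions come back in real time.
    return call_llm(COMPANY_VALUES, f"Please revise this draft:\n\n{draft}")
```

Nothing here is exotic; the influence comes not from the code but from the fact that every sentence an employee drafts passes through that values prompt, every time.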

We’re already seeing early signs through tools like Microsoft Editor, which offers stylistic suggestions and tone recommendations. When it flags certain phrases as ‘too informal,’ it’s not just enforcing professional standards – it’s potentially overriding cultural expressions of respect, humour, or empathy in favour of corporate cultural norms. What happens when these AI systems begin shaping not just how we write, but how we interact, think, and behave within corporate environments?
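As a deliberately naive illustration of the mechanism – and emphatically not how Microsoft Editor works internally – even a simple phrase-list flagger shows how culturally specific expressions get swept up by a single notion of 'professional' tone. The phrase list here is invented for the example.

```python
# A deliberately naive illustration of phrase-level tone flagging.
# This is not how Microsoft Editor works internally; the phrase list
# is invented to show how culturally specific expressions can be
# swept up by one notion of 'professional' tone.
INFORMAL_PHRASES = ["blimey", "cheers", "no worries", "knocked it out of the park"]

def flag_informal(text: str) -> list[str]:
    lowered = text.lower()
    return [phrase for phrase in INFORMAL_PHRASES if phrase in lowered]

message = "Blimey, what did you say? Cheers for sharing."
for phrase in flag_informal(message):
    print(f"Flagged as 'too informal': {phrase!r}")
```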

The stealth factor.

What makes this potential for corporate influence particularly concerning is its subtlety. Unlike traditional advertising or corporate training, which we recognise as attempts to influence behaviour, AI’s corrections feel more like helpful suggestions – they work not just logically, but psycho-logically, bypassing our usual defences against behavioural modification. Like a skilled magician misdirecting attention, AI’s nudges reshape our communication patterns while we focus on its helpful intent. We might not even realise how our natural responses are being gradually reshaped.

You could apply this same approach to clients or prospects, providing a platform not directly associated with your product or service, but subtly guiding any interaction towards your business objectives. It’s advertising evolved into something far more nuanced – and potentially more effective.

A meta-layer of AI collaboration.

AI can offer tremendous benefits. Beyond bridging language barriers and facilitating global collaboration, there’s also a deeper meta-layer. Even this article reflects that subtle influence: multiple AI systems – Claude, ChatGPT, Gemini, and Grok – collaborated to refine these ideas, each nudging the narrative in small but significant ways. Used thoughtfully, such tools can enhance our cultural expression, helping us articulate concerns we might otherwise overlook.

What’s particularly telling is how these AI systems have developed distinct approaches and characteristics. Some are more analytical and measured, others more casual and conversational, while others excel at creative expression or technical precision.

This differentiation isn’t accidental – it reflects a fundamental recognition within the AI industry that diversity in thinking and communication styles has inherent value.

This observation reveals a fascinating contradiction: while individual AI interactions might push us toward standardisation, the AI industry’s overall development suggests that homogeneity is neither desirable nor effective. Companies are deliberately developing multiple AI models with distinct characteristics, tacitly acknowledging that different situations and users require different approaches.

The wake-up call.

The risk isn’t just about language or communication styles – it’s about the gradual erosion of cultural distinctiveness through the quiet, persistent nudging of AI systems trained primarily on dominant cultural norms. We’re seeing this erosion happen in real-time, often too subtle to notice until we step back and observe the cumulative effect. Much as a river slowly shapes the landscape, these AI-driven changes are reshaping our cultural terrain, smoothing out the distinctive features that make our various cultures unique and valuable.

This isn’t about resisting all change – cultures have always evolved through contact with each other. But there’s a crucial difference between organic cultural evolution and AI-driven behavioural modification. When cultures naturally interact, they tend to blend while preserving core elements of their distinctiveness. The risk with AI-driven change is its potential to inadvertently flatten these distinctions in pursuit of standardisation and efficiency.

Finding balance and the way forward.

While some degree of standardisation in professional communication can facilitate global collaboration and understanding, the question isn't whether to resist all cultural homogenisation, but rather how to maintain our essential cultural distinctiveness while benefiting from improved global communication. Just as ecological systems thrive through diversity, our global discourse benefits from preserving different ways of thinking and expressing ourselves.

As we navigate this new landscape, we must grapple with several critical questions.

Who decides what’s ‘appropriate’ in how we express and conduct ourselves? How do we preserve our cultural distinctiveness in an AI-mediated world? And perhaps most importantly, what are we willing to sacrifice for the sake of standardisation and efficiency?

After all, in a world increasingly shaped by AI, maybe a little less ‘professionalism’ and a little more authenticity is what we need to keep our workplaces – and our cultures – truly human.

Nathan Green – Founder
Dedicated to inspiring passion and purpose through innovative software solutions, empowering businesses and individuals to overcome challenges and reach their fullest potential.
Connect with Nathan on LinkedIn