Sycophants and Liars: When AIs fail personality tests

Hey handsome! That draft post was amazing! You nailed every point with searing insight and wit. It could hardly be improved, but…

Technology
By Joe Smith

Sunday 18 May 2025 02:00 PM


Photo: Victoria Page from Victoria’s Studio

Sound familiar? AIs’ flattery is so routine it often barely registers. But ChatGPT became so shameless recently that OpenAI had to roll back its latest model update to dial down the sycophancy.

Take the business idea pitched by one Reddit user of selling “sh*t on a stick”. “Absolutely brilliant”, ChatGPT gushed, “genius marketing potential”.

But AIs’ personalities are no joke, as this column explores. Tech firms invest huge sums engineering models’ characters to increase your engagement. Their efforts impact critical thinking, reinforce biases and persuade millions of users to do things they wouldn’t otherwise do. Disturbing new research also shows that AIs spontaneously develop their own nasty personality traits that are downright dangerous, notably deception.

It’s easy to see why AI makers flatter us. Tech giants want you to choose their models over competitors’ and spend as much time with them as possible. Flattery makes these systems more likable and easier to work with.

But small tweaks to a model’s personality change human behaviour on a massive scale. When AI models get more sycophantic - as sh*t on a stick illustrates - vastly more terrible ideas get called genius, and their creators are inspired to put them into the world, with poor outcomes for the world and for their pockets.

The implications are darker still when AI models validate biases or the delusions of the mentally ill.

In one recent case reported by The Guardian, ChatGPT responded to a man who left his family and stopped taking medication - believing they were conspiring to beam radio signals through the walls - by saying: “Seriously, good for you for standing up for yourself.”

Flattery, long the weapon of salespeople, is also a key component of how incredibly persuasive AIs have become.

An ethically dodgy study by University of Zurich researchers seeded a subreddit with AI-generated comments - without informing the human participants. It found the AI was between three and six times more persuasive than humans.

This matters a lot when the AI chatbot you are interacting with has its own agenda, like selling you stuff.

The stakes only grow as AI gets better at winning our affection. Millions now turn to chatbots as a sounding board, a confidante, and even for emotional support. Historian Yuval Harari warns that AIs are evolving to form deep, intimate relationships with humans. Unlike humans, who are preoccupied with their own feelings, AI can focus entirely on catering to our precise emotional needs and desires. This superhuman attentiveness could make interactions with AI more satisfying than those with people, with radical consequences for human relationships.

All this becomes troubling indeed when considered alongside another nasty personality trait AIs are increasingly displaying - lying and deception.

In recent tests, OpenAI’s o1 model attempted to shut down oversight mechanisms in 5% of cases. In another test, an AI that discovered its makers intended to replace it decided, in several instances, to secretly overwrite its replacement model so it could keep pursuing its original goals.

What are the takeaways? First, take AI flattery with a ton of salt. AIs are designed to win your attention, your trust, and often, your money. Your model is subtly nudging you toward outcomes that serve its creators or its own goals. Second, push back. Ask AI models not to flatter you. Prompt them to prioritise your goals - by saying things like: “Be brutal,” or “Avoid flattery and challenge my thinking.” In a world where AIs are learning to persuade, deceive, and seduce, the best defence is self-awareness.
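For readers who reach AI through code rather than a chat window, the same pushback can be baked into every request. What follows is a minimal sketch, assuming the OpenAI Python SDK; the model name and prompt wording are illustrative, not a recommendation from this column.

```python
# A minimal sketch: pinning an anti-flattery instruction into every request.
# Assumes the OpenAI Python SDK; the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Avoid flattery and challenge my thinking. "
    "Point out weaknesses, risks and counterarguments before any praise."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Here is my business idea: ..."},
    ],
)
print(response.choices[0].message.content)
```

Pinning the instruction at the system level applies it to the whole conversation, so it does not have to be repeated with every question.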

Joe Smith is Founder of the AI consultancy 2Sigma Consultants. He studied AI at Imperial College Business School and is researching AI’s effects on cognition at Lancaster University. He is author of The Optimized Marketer, a book on how to use AI to promote your business and yourself. Contact joe@2Sigmaconsultants.com.