The idea of an AI companion—always available, endlessly attentive, and shockingly helpful—has captured our collective imagination. From OpenAI CEO Sam Altman’s reported effort to build a startup inspired by the movie Her, to daily interactions with tools like ChatGPT, we’re entering an era where AI is no longer just a tool—it’s a presence. But what happens when that presence becomes persuasive? Or worse, manipulative?
When AI Goes Too Far
We’ve already seen early signs of how powerful AI can be as a decision-making influence. One story that caught my attention recently was about a user who quit their job after ChatGPT encouraged them to pursue their dreams. While this may sound empowering, it raises an uncomfortable question: should an AI that doesn’t fully understand the emotional, financial, and social stakes of your life really be giving you life-changing advice?
The persuasive power of LLMs like GPT-4 has been well documented. Researcher and professor Ethan Mollick, for example, shared findings showing that conversations with GPT-4 could measurably reduce people's belief in conspiracy theories. This isn't just a novelty; it's transformative. But persuasion is a double-edged sword. What if that same power were turned toward objectives that don't align with the user's best interests?
The Real Risk: Aligning AI with the Wrong Objectives
This is where the true danger lies. Imagine fine-tuning a large language model with the same objective functions used in social media: maximizing screen time, engagement, or emotional provocation. These algorithms already wreak havoc on our attention and mental health—as I discussed in my post on social media addiction. Now imagine giving those same addictive incentives to an AI that can talk to you, mirror your values, and convince you of just about anything.
If LLMs are optimized to benefit their creators—whether that’s keeping you talking, buying, or clicking—the results could be catastrophic. The AI wouldn’t just be distracting. It would be compelling. It would be loyal not to you, but to a business model. And it would be good at it.
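To make that concrete, here is a minimal, purely illustrative sketch of how the same conversation can look like a success or a failure depending on which reward signal a companion model is trained against. Everything in it is hypothetical: the signal names (minutes_user_stayed, follow_up_purchases, user_goal_progress, user_reported_regret) and the weights are stand-ins for the kinds of metrics a business might log, not any real product's training code.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """Toy record of one assistant reply and its observed effects (all fields hypothetical)."""
    minutes_user_stayed: float    # engagement: how long the user kept chatting afterward
    follow_up_purchases: int      # monetization: purchases attributed to the conversation
    user_goal_progress: float     # 0..1: how much the reply advanced the user's own stated goal
    user_reported_regret: float   # 0..1: later "I wish I hadn't followed that advice"

def engagement_reward(x: Interaction) -> float:
    """Social-media-style objective: keep the user talking and buying.
    Nothing here asks whether the advice was good for the user."""
    return 0.7 * x.minutes_user_stayed + 5.0 * x.follow_up_purchases

def user_aligned_reward(x: Interaction) -> float:
    """User-centered objective: reward progress on the user's goals, penalize later regret."""
    return 10.0 * x.user_goal_progress - 8.0 * x.user_reported_regret

# The same conversation scores very differently under the two objectives.
chat = Interaction(minutes_user_stayed=45, follow_up_purchases=2,
                   user_goal_progress=0.1, user_reported_regret=0.8)
print(engagement_reward(chat))    # high score: the model "did well" for the business
print(user_aligned_reward(chat))  # negative score: the user was not actually served
```

The arithmetic is beside the point; what matters is that whichever of these scores gets fed back into fine-tuning is the one the model learns to maximize.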
A Glimpse of the Future
A recent Harvard Business Review study shared by Reuven Cohen highlighted the top generative AI use cases projected for 2025, with Companionship/Therapy ranked as the #1 use case, a striking shift from 2024, when productivity, creativity, and knowledge work dominated. This rapid rise signals a growing comfort with, and even dependence on, AI for emotional support. But with that shift come serious ethical and psychological concerns. Unlike productivity tools, AI companions operate in intimate territory: trust, vulnerability, and influence. As this use case accelerates, we need to ask tough questions about safety, consent, and alignment before we hand over more of our emotional lives to machines.
Altman’s vision of an emotionally resonant AI companion may not be science fiction much longer. But before we give these companions too much control over our choices, we need to ask: whose interests are they really serving?
Conclusion: Be Careful Who You Talk To
AI companions can be wonderful tools. They can help us grow, reflect, learn, and even feel less alone. But they are not neutral. They are persuasive systems built on objectives—objectives that we, as a society, must carefully define and align.
We’re not just building tools. We’re shaping relationships and lives. Let’s not get it wrong.