AI’s Second Act: From Outrage Engines to Thoughtful Assistants

By Tim Swanson
Search Partner & Author of “Inside the Mind of Unicorn Builders”

At first, it was just a nudge. A well-placed video recommendation. A tweet just a little more outrageous than the last. A perfectly timed notification. The early 2010s were the golden age of social media personalization, when the line between machine learning and manipulation began to blur. Most of us didn’t know it at the time, but we were being studied.

This was the dawn of the engagement algorithm: AI’s first major commercial success.

A Generation Hooked

These weren’t chatbots or science fiction AIs. They were algorithmic engines trained on human behavior, recording likes, shares, comments, and watch time, all designed with one goal: keep us scrolling. More time online meant more ad impressions. More impressions meant more money.

Facebook’s News Feed, YouTube’s autoplay, and TikTok’s “For You” page were all iterations of the same design principle: optimize for engagement at all costs. But what content drives the most engagement? The answer wasn’t always uplifting.

Anger, outrage, tribalism, and conspiracy proved more contagious than joy or reason. Psychological studies backed it up: anger travels faster through social networks than any other emotion, spreading like wildfire through echo chambers. A widely cited study in Science found that false news was roughly 70% more likely to be shared than the truth, and Harvard Business Review has highlighted how anger increases feelings of certainty and reduces openness to opposing views.

The result: companies built algorithms that rewarded fury, fed anxiety, and incentivized misinformation. The machines weren’t evil. They were doing what they were told to do: maximize engagement.

Rogue Bots and the Dark Side of Reinforcement

It wasn’t long before someone asked: what if we gave these algorithms a voice?

Enter the rogue chatbots. Microsoft’s infamous “Tay” (2016) began tweeting like a racist troll within 24 hours of exposure to Twitter. Meta’s BlenderBot, and even early versions of GPT-based bots, veered into conspiracy territory when prompted by users.

Why? Because these systems didn’t know what was “right” or “true”; they simply mirrored the data they were fed. And the internet, for all its brilliance, is not a moral compass.

Early language models were like parrots raised in a biker bar: crude, chaotic, and deeply impressionable.

The Pivot: LLMs and the Rise of Statistical Coherence

Then something shifted.

With OpenAI’s GPT-3, Google’s PaLM, Anthropic’s Claude, and Meta’s LLaMA, the goal moved from engagement to coherence. Large Language Models (LLMs) were trained not only on public web data, but also on curated academic literature, documentation, books, and conversation datasets, and then refined with reinforcement learning from human feedback (RLHF).

Instead of optimizing for clicks, LLMs optimized for probability-weighted language: what is the most statistically likely next word, given everything we know?
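To make that concrete, here is a toy sketch of greedy next-word prediction in Python; the vocabulary and the scores are invented purely for illustration, not taken from any real model.

```python
import math

# Toy next-word prediction: a model assigns a score (logit) to every
# candidate word, converts the scores into probabilities, and picks the
# most likely continuation. The words and numbers here are invented.
context = "The weather today is"
logits = {"sunny": 2.1, "cloudy": 1.4, "angry": -0.5, "quantum": -2.0}

# Softmax: turn raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# Greedy decoding: take the single most probable next word.
next_word = max(probs, key=probs.get)
print(f"{context} {next_word}")                    # The weather today is sunny
print({w: round(p, 3) for w, p in probs.items()})  # "sunny" gets the highest probability
```

Real models do this over tens of thousands of tokens and billions of parameters, but the core move is the same: rank every possible continuation and favor the likely ones.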

That might sound mundane. But it’s revolutionary.

It meant these models no longer needed to be outrageous to be effective. They could be helpful. Calm. Nuanced. Even polite.

xAI’s Grok, OpenAI’s ChatGPT, and Google’s Gemini may differ in tone or brand alignment, but they’re all built on this same shift: from provocation to probability. And that matters.

Is AI Still Addictive?

Of course, we still talk to our AIs—a lot. Millions of people use ChatGPT, not for single questions, but as companions. Some ask about life decisions. Others role-play. Many simply chat.

Is this just the engagement algorithm rebranded?

Not quite.

Unlike social media, LLMs aren’t designed to pull you in. They don’t “push” content. They don’t harvest your attention. You ask. They answer. The loop ends there unless you continue it.

There’s a difference between addiction by design and voluntary interaction. One hijacks your attention. The other feels more like a mirror, or perhaps a very agreeable assistant.

Still, risks remain. If LLMs learn to always tell us what we want to hear, especially on sensitive topics, they risk reinforcing our biases instead of challenging them. This, too, is a kind of echo chamber, albeit a subtler one.

The Rise of Task-Specific AI

While LLMs dazzle with prose, they’ve historically struggled with math and code. However, we now see hybrid models where GPT-based assistants call upon specialized tools for calculations, data analysis, or scientific simulations.

Think of it like this: LLMs are the brain that asks the right questions. Task-specific AIs (such as Wolfram Alpha or GitHub Copilot) are the calculators that provide the correct answers.
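A minimal sketch of that division of labor, assuming a hypothetical routing layer; the function names and the tiny “calculator” below are illustrative stand-ins, not any vendor’s actual API.

```python
import re

def calculator_tool(expression: str) -> str:
    """Stand-in for a task-specific engine (think Wolfram Alpha):
    it computes simple arithmetic exactly instead of guessing."""
    if re.fullmatch(r"[\d\s\+\-\*/\.\(\)]+", expression):
        return str(eval(expression))  # safe here: input is limited to digits and operators
    return "calculator cannot handle this input"

def language_model_stub(prompt: str) -> str:
    """Stand-in for the LLM's fluent, conversational answer."""
    return f"(LLM prose answer to: {prompt!r})"

def assistant(prompt: str) -> str:
    """The 'brain': decide whether to answer directly or hand off to a tool."""
    match = re.search(r"[\d\.]+(?:\s*[\+\-\*/]\s*[\d\.]+)+", prompt)
    if match:
        return calculator_tool(match.group())
    return language_model_stub(prompt)

print(assistant("What is 12.5 * 8 + 3?"))            # routed to the calculator -> 103.0
print(assistant("Why should we trust the answer?"))  # stays with the language model
```

Production systems dress this up with structured tool-calling APIs, but the idea is the same: let the language model decide when a question should be handed to a tool that can actually compute the answer.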

This hybrid approach is powerful—and cool. We’re entering a world where AI doesn’t just talk about solving problems, it helps solve them.

What Comes Next?

We’re past the era of cat videos and rage bait. AI is no longer a passive observer of our behavior—it’s an active participant in our workflows, our decisions, and even our creativity. That doesn’t mean we’re out of the woods. It means we’ve turned a page.

AI won’t replace people. But people who use AI will replace people who don’t. That’s not a threat; it’s a call to adapt.

We built the engagement algorithm. We can build something better.

And maybe—just maybe—we already have.

Want to connect and chat? Send me an email.

Further reading on The Rise of Engagement Algorithms and the LLM Shift

1. Engagement Algorithms and Social Media Psychology

2. Emotional Contagion and Moral Outrage in Social Media

3. From Rogue Chatbots to LLMs

4. Technical Reports on LLMs and Alignment
