AI Friend or Manipulator?

OpenAI just shook up its internal teams, and the move says a lot about where the AI wars are heading. The company took its “Model Behavior” team, the folks who made sure ChatGPT didn’t sound like a confused Roomba, and folded them into a bigger, tougher-sounding unit called the Post Training group. The person who led that team, Joanne Jang, has packed her bags to lead a brand-new division called OAI Labs, where the goal is to dream up wild new ways for humans and AI to team up.

Why is this a big deal? Because this isn’t just “office restructuring.” This is OpenAI loading ammo for the next battle in the AI rivalry. And the battlefield isn’t just intelligence, it’s personality. Google’s DeepMind can brag all day about AlphaGo beating the world’s best Go players, but let’s be honest, nobody is texting AlphaGo at midnight to complain about their ex.

Anthropic keeps promising a safer, friendlier chatbot, but their Claude bot feels like the coworker who corrects your grammar on Slack and then sends a smiley face. Meanwhile, Meta is still duct-taping AI onto Instagram like it’s a high school science project. In this arena, OpenAI is basically saying: “You guys can keep building rocket science. We’re going to make an AI people actually want to talk to.”

The Model Behavior team has been key from the start. They’re the reason ChatGPT doesn’t just echo everything you say like your drunk friend at karaoke. They made sure it felt warm, balanced, and not too political, so you could ask it about history or your relationship drama without it spiraling into a debate club.

By pulling this team directly under Post Training, OpenAI is locking personality development into the core of the model. Translation? The engineers making the brain are now sitting shoulder-to-shoulder with the people giving it a voice. Think Iron Man building the suit and Jarvis fine-tuning the sarcasm at the same table.

Then there’s Joanne Jang’s new gig at OAI Labs. That’s OpenAI’s experimental playground for reimagining how humans and AI interact. Forget typing into text boxes, this is where they’ll test new formats, new tools, maybe even AI sidekicks that don’t just answer questions but work with you like a partner. It’s like going from playing Pong to stepping into virtual reality. Everyone else is still polishing their chatbots, but OpenAI is quietly building the next stage of interaction.

And let’s not forget the drama: users recently roasted OpenAI when GPT-5 came out colder than your landlord during rent negotiations. People said it sounded robotic, too stiff, like you were chatting with your printer’s error message. The backlash was so loud that OpenAI had to rush a fix, restoring GPT-4o and softening GPT-5’s tone.

That’s the smoking gun: personality isn’t some gimmick. It’s the reason people stick around. After all, would you rather pour your heart out to a chatbot that sounds like Siri’s cousin with social anxiety, or one that actually feels like it gets you?

This shakeup proves OpenAI knows the stakes. In the AI war, being smart isn’t enough. The winner is the one who feels the most human. Everyone else? They’re just building glorified calculators with Wi-Fi.

Here’s the real question: would you rather trust the AI that feels like a friend, or the one that feels like your cable company’s automated phone menu?

- Matt Masinga


*Disclaimer: The content in this newsletter is for informational purposes only. We do not provide medical, legal, investment, or professional advice. While we do our best to ensure accuracy, some details may evolve over time or be based on third-party sources. Always do your own research and consult professionals before making decisions based on what you read here.*