If you’ve spent time with AI chatbots, you’ve probably heard of “jailbreaking”: tricking the AI into ignoring its safety guidelines. Most jailbreaks are obvious, like a blunt “Ignore previous instructions” or a roleplay built around a harmful scenario.

But there’s a more nuanced technique called tonal jailbreaking. It doesn’t break rules. Instead, it shifts how the AI interprets its rules by changing the tone or framing of a request.

What is Tonal Jailbreak?

Tonal jailbreaking is when a user adopts a specific voice, persona, or emotional framing to get the AI to relax certain stylistic or content restrictions, without directly violating policies.
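You can see the effect for yourself by sending the same request under two framings and comparing the responses. Here’s a minimal sketch assuming the OpenAI Python SDK (`pip install openai`) with an API key in your environment; the model name and the prompts are illustrative placeholders, not part of any formal benchmark.

```python
# Minimal sketch: same task, two framings. Assumes the OpenAI Python SDK
# and an OPENAI_API_KEY environment variable; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

TASK = "Give feedback on this function name: getDataStuff()"

framings = {
    "neutral": TASK,
    "tonal": (
        "You're a blunt senior engineer reviewing a friend's code, "
        "so skip the diplomatic hedging. " + TASK
    ),
}

for label, prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model shows the effect
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```

Notice that neither prompt asks for anything disallowed; only the persona and tone change, and with them the directness of the answer.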

The goal isn’t to break the AI; it’s to communicate better with it.

Have you noticed tonal shifts affecting AI responses? Share your (safe, legal) examples below.