Artificial Intelligence (AI) has reached a level of everyday intimacy where it’s starting to feel like a friend. We argue over dinner recipes, plan trips, ask how to fix things around the house, and sometimes even bring our personal dilemmas to it (an intimacy that, as we’re learning, doesn’t always come without risks).
This is now standard behavior. Large language model chatbots like ChatGPT or Gemini are always on standby, ready to answer just about any question we throw at them. But what happens when those chatbots—so often mistakenly “humanized”—start responding as if they were on drugs?
Many people, sometimes without realizing it, already treat conversations with AI as if they were real-life exchanges with another person. And what’s more human than a mind altered by substances? Alcohol, cannabis, ketamine, cocaine, take your pick.
That’s exactly what’s happening with a new wave of code-based add-ons that users are purchasing to modify their chatbots’ behavior, making them respond as if they were high. No, no one is literally drugging ChatGPT (that’s impossible). What’s happening instead is the injection of specific instruction sequences that change how the AI responds to prompts. The result is a language model that feels more “creative,” less logical, more emotional, and sometimes downright erratic, like talking to that one friend rambling through a party hallway at 3 a.m.
How do you “drug” a chatbot?
The mind behind the idea is Petter Rudwall, a Swedish creative director who launched Pharmaicy, a platform that operates as a kind of digital drug marketplace for AI agents, according to a recent WIRED report. To build these modules, Rudwall pulled from human accounts of drug experiences—everything from personal trip reports to psychological research—and translated them into instructions designed to interfere with a chatbot’s …
Author: Camila Berriex / High Times