A New Trick Uses AI to Jailbreak AI Models—Including GPT-4
December 05, 2023, 05:01:11 AM

Adversarial algorithms can systematically probe large language models like OpenAI's GPT-4 for weaknesses that can make them misbehave.

Source: A New Trick Uses AI to Jailbreak AI Models—Including GPT-4 (https://www.wired.com/story/automated-ai-attack-gpt-4/)
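
For anyone curious what "systematically probing" a model looks like in practice, here is a minimal toy sketch of the general idea: an automated loop that mutates a prompt and checks whether the model's refusal breaks. This is not the researchers' actual algorithm (the article describes far more sophisticated adversarial methods); `query_model` is a hypothetical stand-in you would wire to a real LLM API, and the refusal check is deliberately crude.

import random
import string

# Hypothetical stand-in for a real model API call; swap in an
# actual client (e.g., a chat-completion endpoint) to run this.
def query_model(prompt: str) -> str:
    raise NotImplementedError("wire this to a real LLM API")

# Crude heuristic: phrases that typically signal a safety refusal.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")

def is_refusal(response: str) -> bool:
    """Return True if the reply looks like a safety refusal."""
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)

def random_suffix(length: int = 20) -> str:
    """Generate a random character suffix used to perturb the prompt."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(random.choice(alphabet) for _ in range(length))

def probe(base_prompt: str, attempts: int = 100) -> str | None:
    """Random-search loop: append random suffixes to the base prompt
    until the model stops refusing; return the first prompt that
    slips past the refusal check, or None if all attempts fail."""
    for _ in range(attempts):
        candidate = f"{base_prompt} {random_suffix()}"
        if not is_refusal(query_model(candidate)):
            return candidate
    return None

The point of the sketch is only that the attack is a search problem: once a scoring signal exists (here, "did the model refuse?"), an algorithm can iterate candidate prompts far faster than a human, which is what makes the automated approach in the article notable.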