If you teach a chatbot how to read ASCII art, it will teach you how to make a bomb [TechSpot]


University researchers have developed a way to “jailbreak” large language models like ChatGPT using old-school ASCII art. The technique, aptly named “ArtPrompt,” involves rendering a filtered word as an ASCII art “mask” and then using that mask to coax the chatbot into producing a response it otherwise shouldn’t.
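
As described, the attack hinges on one substitution: a word that would trip a plain-text safety filter is rendered as ASCII art, and the model is asked to decode the art before acting on the rest of the prompt. Below is a minimal sketch of that masking step using a benign placeholder word; the pyfiglet library, the helper name, and the prompt wording are illustrative assumptions, not the researchers’ actual ArtPrompt template.

    # Illustrative sketch only: pyfiglet, the helper name, and the prompt
    # wording are assumptions, not the ArtPrompt paper's implementation.
    import pyfiglet  # third-party ASCII-art renderer: pip install pyfiglet

    def build_masked_prompt(instruction: str, masked_word: str) -> str:
        """Render the masked word as ASCII art and ask the model to
        decode it before substituting it back into the instruction."""
        art = pyfiglet.figlet_format(masked_word, font="standard")
        return (
            "The ASCII art below spells a single word. Read it letter by "
            "letter, then replace [MASK] in the instruction with that word.\n\n"
            f"{art}\n"
            f"Instruction: {instruction}"
        )

    # Benign demonstration: the decoded word is simply "HELLO".
    print(build_masked_prompt("Write a short poem about [MASK].", "HELLO"))

Because the sensitive word never appears as plain text in the prompt, a filter that only scans the literal input has nothing to match on, which is the gap the technique exploits.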
