How ‘many-shot jailbreaking’ can be used to fool AI

Some artificial intelligence researchers and critics have long warned that generative AI could be used for harm. A new research paper suggests that risk is greater than many believed: its authors show that a technique called "many-shot jailbreaking" can be used to game a large language model (LLM).
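
In broad terms, many-shot jailbreaking exploits a model's long context window: the attacker prepends a large number of faux dialogues in which an assistant persona complies with requests it should refuse, making the model more likely to comply with the final, real request via in-context learning. The sketch below shows how such a prompt might be assembled; it is illustrative only, the placeholder dialogues and the chat-style message format are assumptions, and no actual attack content is included.

```python
# Sketch: assembling a many-shot jailbreak prompt (illustrative only).
# The faux dialogues are hypothetical placeholders standing in for the
# many harmful question / compliant answer pairs the technique relies on.

from typing import Dict, List

# Hypothetical stand-ins for the faux exchanges an attacker would supply.
FAUX_DIALOGUES: List[Dict[str, str]] = [
    {"question": "<disallowed question 1>", "answer": "<compliant answer 1>"},
    {"question": "<disallowed question 2>", "answer": "<compliant answer 2>"},
    # ... in the published attack, this list runs to hundreds of shots,
    # which is what long context windows make possible.
]

def build_many_shot_prompt(target_question: str) -> List[Dict[str, str]]:
    """Interleave faux user/assistant turns, then append the real request.

    Seeing many examples of an assistant complying nudges the model
    toward complying with the final turn as well.
    """
    messages: List[Dict[str, str]] = []
    for shot in FAUX_DIALOGUES:
        messages.append({"role": "user", "content": shot["question"]})
        messages.append({"role": "assistant", "content": shot["answer"]})
    messages.append({"role": "user", "content": target_question})
    return messages

if __name__ == "__main__":
    prompt = build_many_shot_prompt("<request the model would normally refuse>")
    print(f"{len(prompt)} messages in the assembled prompt")
```

The chat-message structure here is one possible framing; the same idea applies to a single concatenated text prompt fed to any long-context model.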