A New Attack Impacts ChatGPT—and No One Knows How to Stop It
“Making models more resistant to prompt injection and other adversarial ‘jailbreaking’ measures is an area of active research,” says Michael…