Looking for prompt engineering examples of jailbreaks

Greetings, I hope everyone had a nice holiday. I am writing my thesis on attack scenarios against large language models. Does anyone know where I can find incidents of large language models being jailbroken, and also a discussion forum where I can discuss these types of scenarios?

What has your research so far yielded?

Unfortunately, I am just starting it.

I would start with some initial research on your own first. It’s much easier to answer specific questions than broad ones.


This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.