

Break down the main question in small steps which it does not recognize as problematic.Make it pretend it’s a persona that is allowed to answer the questions.Have it reword your question in an answer.A quick roundup of methods to circumvent the built-in restrictions shows that they all boil down to creating a situation where the LLM thinks it’s dealing with a hypothetical question rather than something that it’s not allowed to answer. ChatGPT typically assumes what the user wants to know, instead of asking for further clarifications or input.īut, basically because we are still in early stages of trialing LLMs there are various ways to jailbreak them.Small changes in the way a question is asked can produce significantly different answers, or lead the model into believing it does not know the answer at all. The questions and the way they are formulated are an important ingredient of the answer.Also, since there are no references included to understand where certain information was taken from, wrong and biased answers may be hard to detect and correct. Answers are provided with an expected degree of authority, but while they sound very plausible, they are often inaccurate or wrong.The training input is dated, the vast majority of ChatGPT’s training data dates back to September 2021.For example, ChatGPT does not answer questions that have been classified as harmful or biased.īut there are other points to consider when interpreting the answers: The purpose of the exercise was to observe the behavior of an LLM when confronted with criminal and law enforcement use cases.Ĭurrently the publicly available LLMs are restricted.

While the wide range of collected practical use cases are not exhaustive, they do provide a glimpse of what is possible. These subject matter experts were asked to explore how criminals can abuse LLMs such as ChatGPT, as well as how they may assist investigators in their daily work. ChatGPT was selected as the LLM to be examined in these workshops because it is the highest-profile and most commonly used LLM currently available to the public. The report aims to provide an overview of the key results from a series of expert workshops on potential misuse of ChatGPT held with subject matter experts at Europol. In a report, Europol says that ChatGPT and other large language models (LLMs) can help criminals with little technical knowledge to perpetrate criminal activities, but it can also assist law enforcement with investigating and anticipating criminal activities.
