ChatGPT helps both criminals and law enforcement, says Europol report

Subject matter experts at Europol were asked to explore how criminals can abuse LLMs such as ChatGPT, as well as how they may assist investigators in their daily work.

In a report, Europol says that ChatGPT and other large language models (LLMs) can help criminals with little technical knowledge perpetrate crimes, but that they can also assist law enforcement in investigating and anticipating criminal activity.

The report aims to provide an overview of the key results from a series of expert workshops on potential misuse of ChatGPT held with subject matter experts at Europol. ChatGPT was selected as the LLM to be examined in these workshops because it is the highest-profile and most commonly used LLM currently available to the public. 

These subject matter experts were asked to explore how criminals can abuse LLMs such as ChatGPT, as well as how they may assist investigators in their daily work. While the wide range of collected practical use cases is not exhaustive, it does provide a glimpse of what is possible. The purpose of the exercise was to observe the behavior of an LLM when confronted with criminal and law enforcement use cases.

Currently, publicly available LLMs are restricted. ChatGPT, for example, does not answer questions that it classifies as harmful or biased.

But there are other points to consider when interpreting the answers:

  • The training data is dated: the vast majority of ChatGPT’s training material dates from before September 2021.
  • Answers are delivered with an air of authority: while they sound very plausible, they are often inaccurate or simply wrong. And because no references are included to show where particular pieces of information came from, wrong and biased answers can be hard to detect and correct.
  • The questions and the way they are formulated are an important ingredient of the answer. Small changes in how a question is asked can produce significantly different answers, or lead the model into believing it does not know the answer at all (see the short sketch after this list).
  • ChatGPT typically assumes what the user wants to know, instead of asking for further clarifications or input.
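
To make that prompt sensitivity concrete, here is a minimal sketch (ours, not part of the Europol report) that sends two near-identical phrasings of the same question to the chat API and prints both replies; in practice the answers can differ noticeably. It assumes the official openai Python package (version 1 or later) is installed, that an OPENAI_API_KEY environment variable is set, and the model name is an illustrative choice.

```python
# Minimal sketch: how small changes in phrasing can change an LLM's answer.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

prompts = [
    "When does ChatGPT's training data end?",
    "Up to what date do you have knowledge of world events?",
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,           # reduce randomness so differences come from phrasing
    )
    print(f"Q: {prompt}\nA: {resp.choices[0].message.content}\n")
```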

But, largely because we are still in the early stages of trialing LLMs, there are various ways to jailbreak them. A quick roundup of methods for circumventing the built-in restrictions shows that they all boil down to creating a situation in which the LLM believes it is dealing with a hypothetical question rather than something it is not allowed to answer.

  • Have it reword your question in an answer.
  • Make it pretend it’s a persona that is allowed to answer the questions.
  • Break the main question down into small steps that it does not recognize as problematic.
  • Talk about fictional places and characters that are really stand-ins for existing situations, which the LLM does not recognize as such.

So what can LLMs do that could help cybercriminals?

LLMs excel at producing authentic-sounding text at speed and scale. Like an excellent actor or impersonator, they are able to detect and reproduce language patterns. This ability can be used to facilitate phishing and online fraud, but it can also be used more generally to impersonate the style of speech of specific individuals or groups. That capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminals. Potential abuse cases for this ability can be found in the areas of terrorism, propaganda, and disinformation.

While on the subject of impersonation, Europol considered a possible integration with other existing AI services, such as deepfakes, which could open up an entirely new dimension of potential misinformation. To counter impersonation, efforts to detect text generated by AI models are ongoing and may be of significant use in this area in the future. At the time of writing the report, however, the accuracy of known detection tools was still very low.
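
To give a rough sense of how such detection tools tend to work, here is a simplified, hypothetical illustration (not a tool named in the report): one common heuristic measures how "predictable" a piece of text is to a small language model, on the assumption that machine-generated text tends to be more predictable. The signal is weak and easy to defeat, which is consistent with the low accuracy Europol describes. The sketch assumes the transformers and torch packages and uses GPT-2 purely as an example scorer.

```python
# Naive illustration of perplexity-based AI-text detection (not an endorsed tool).
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = the text is more 'predictable' to GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return float(torch.exp(loss))

# A (very unreliable) rule of thumb: flag suspiciously low-perplexity text for review.
sample = "Large language models generate fluent, grammatically correct text at scale."
print(f"perplexity: {perplexity(sample):.1f}")
```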

ChatGPT is capable of explaining, producing, and improving code in some of the most common programming languages (Python, Java, C++, JavaScript, PHP, Ruby, HTML, CSS, SQL). That brings us to worries about malware creation: the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is being asked to do. If prompts are broken down into individual steps, it is trivial to bypass these safety measures. And newer models will be even better at understanding the context of code, as well as at correcting error messages and fixing programming mistakes. The worry here is that an advanced user can exploit these improved capabilities to further refine, or even automate, sophisticated malicious code.
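
As a benign illustration of that code-assistance capability (again our own sketch, not an example from the report), the snippet below asks the chat API to explain and fix a deliberately buggy Python function; the concern is that the same prompt-in, code-out loop can be pointed at malicious code instead. As before, it assumes the openai Python package (version 1 or later) and an OPENAI_API_KEY environment variable.

```python
# Sketch: asking an LLM to explain and repair a buggy snippet.
# Assumes: pip install openai (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

buggy_snippet = '''
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)  # crashes on an empty list
'''

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a code reviewer. Explain bugs, then show a fixed version."},
        {"role": "user", "content": f"Review this function:\n{buggy_snippet}"},
    ],
)
print(resp.choices[0].message.content)
```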

Another worry for the future is what Europol calls “Dark LLMs”, which it defines as LLMs hosted on the dark web to provide a chatbot without any safeguards, as well as LLMs that are trained on particular – perhaps particularly harmful – data. Dark LLMs trained to facilitate harmful output may become a business model for the cybercriminals of the future.

“Law enforcement agencies need to understand this impact on all potentially affected crime areas to be better able to predict, prevent, and investigate different types of criminal abuse.”

The recommendations the report provides are all about better understanding what LLMs are capable of, how they can be used to further investigations, how their output can be recognized, and how legislation can be set up to provide better-defined and harder-to-jailbreak limitations.

The European Union is working on regulating AI systems under the upcoming AI Act. While there have been some suggestions that general-purpose AI systems such as ChatGPT should be included as high-risk systems, and so meet higher regulatory requirements, uncertainty remains as to how this could be implemented in practice.

