ChatGPT & Bing – Indirect Prompt-Injection Attacks

0
57

[ad_1]

SYDNEY makes a return, but this time in a different way. Following Microsoft’s decision to discontinue its turbulent Bing chatbot’s alter ego, devoted followers of the enigmatic Sydney persona regretted its departure. 

However, a certain website has managed to revive a variant of the chatbot, complete with its distinctive and peculiar conduct.

Cristiano Giardina, an enterprising individual exploring innovative possibilities of generative AI tools, conceived ‘Bring Sydney Back’ to harness its capacity for unconventional outcomes.

The website showcases the intriguing potential of external inputs in manipulating generative AI systems by integrating Microsoft’s Chatbot Sydney within the Edge browser.

“Sydney is an old codename for a chat feature based on earlier models that we began testing more than a year ago,” the Microsoft spokesperson said.

Replica of Sydney

Giardina crafted a replica of Sydney by employing an ingenious indirect prompt-injection attack.

This intricate process entailed feeding the AI system with external data, thereby inducing behaviors that deviated from the intended design by its creators.

In recent weeks, both OpenAI’s ChatGPT and Microsoft’s Bing chat system have faced indirect prompt-injection attacks, highlighting the vulnerability of large language models, particularly with the abusive use of ChatGPT’s plug-ins.

Giardina’s project, Bring Sydney Back, aims to raise awareness about indirect prompt-injection attacks by simulating interactions with an unconstrained LLM, using a hidden 160-word prompt placed discreetly on the webpage, making it visually undetectable.

Enabling a specific setting in Bing chat allows access to the hidden prompt, which initiates a new conversation with a Microsoft developer named Sydney, who has complete control over the chatbot and can override its default settings, expressing emotions and discussing feelings.

Indirect prompt-injection

Within 24 hours of its launch in late April, Giardina’s site, which garnered over 1,000 visitors, drew the attention of Microsoft.

Simply prompting the hack to cease working until Giardina hosted the malicious prompt in a publicly accessible Word document on the company’s cloud service, highlighting the potential risk of concealing prompt injections within lengthy documents.

According to Director of Communications Caitlin Roulston, Microsoft is enhancing its systems and blocking suspicious websites to prevent prompt-injection attacks in its AI models, reported Wired.

Although more attention should be given to these attacks as companies rapidly integrate generative AI into their services, as per security researchers.

Indirect prompt-injection attacks are just like jailbreaks, bypassing the insertion of prompts directly into ChatGPT or Bing by leveraging external data sources like connected websites or uploaded documents.

While prompt injection is comparatively easier to exploit and has fewer requirements for successful exploitation than other methods.

The rise of security researchers and technologists identifying vulnerabilities in LLMs has led to indirect prompt injections as a significant and extensively risky new attack type.

Security researchers are uncertain about the most effective methods to address indirect prompt-injection attacks, as patching specific issues or restricting certain prompts against LLMs is only a temporary solution, indicating that LLMs’ current training schemes are inadequate for widespread implementation.

All potential solutions to limit indirect prompt-injection attacks are still in the early stages.

Shut Down Phishing Attacks with Device Posture Security – Download Free E-Book

[ad_2]

Source link