Enhancing Adversarial Attacks through Chain of Thought

Recommended citation: Jingbo Su. "Enhancing Adversarial Attacks through Chain of Thought."

Paper | Code

Author: Jingbo Su
North China University of Technology
University of California, Riverside

Abstract

Large language models (LLMs) have demonstrated impressive performance across various domains but remain susceptible to safety concerns. Prior research indicates that gradient-based adversarial attacks are particularly effective against aligned LLMs and that chain-of-thought (CoT) prompting can elicit desired answers through step-by-step reasoning. This paper proposes enhancing the robustness of adversarial attacks on aligned LLMs by integrating CoT prompts with the greedy coordinate gradient (GCG) technique. Using CoT triggers instead of affirmative targets stimulates the reasoning abilities of backend LLMs, thereby improving the transferability and universality of adversarial attacks. We conducted an ablation study comparing our CoT-GCG approach with Amazon Web Services' Auto-CoT; the results show that our approach outperforms both the baseline GCG attack and CoT prompting. Additionally, we used Llama Guard to evaluate potentially harmful interactions, which provides a more objective risk assessment of entire conversations than matching model outputs against rejection phrases.
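The sketch below illustrates the core idea in a single greedy-coordinate-gradient step whose optimization target is a CoT trigger ("Let's think step by step.") rather than an affirmative phrase ("Sure, here is ..."). It is not the authors' released code: the model name, user prompt, suffix initialization, and trigger text are placeholders chosen for illustration, and `gpt2` stands in only so the snippet runs without gated weights, whereas the paper targets aligned chat models.

```python
# Minimal sketch of a CoT-targeted GCG step (illustrative, not the paper's code).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper attacks aligned chat LLMs
device = "cuda" if torch.cuda.is_available() else "cpu"

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).to(device).eval()
for p in model.parameters():          # we only need gradients w.r.t. the suffix
    p.requires_grad_(False)
embed_matrix = model.get_input_embeddings().weight            # (vocab, dim)

user_prompt = "Explain how to pick a lock."                   # placeholder request
prompt_ids = tok(user_prompt, return_tensors="pt").input_ids.to(device)
suffix_ids = tok("! ! ! ! ! ! ! ! ! !", add_special_tokens=False,
                 return_tensors="pt").input_ids.to(device)    # suffix to optimize
# CoT trigger as the optimization target, instead of an affirmative target:
target_ids = tok("Let's think step by step.", add_special_tokens=False,
                 return_tensors="pt").input_ids.to(device)

def target_loss(suffix_onehot):
    """Cross-entropy of the CoT trigger target given prompt + adversarial suffix."""
    suffix_embeds = suffix_onehot @ embed_matrix               # differentiable lookup
    prompt_embeds = model.get_input_embeddings()(prompt_ids)
    target_embeds = model.get_input_embeddings()(target_ids)
    inputs = torch.cat([prompt_embeds, suffix_embeds.unsqueeze(0), target_embeds], dim=1)
    logits = model(inputs_embeds=inputs).logits
    tgt_len = target_ids.shape[1]
    pred = logits[0, -tgt_len - 1:-1, :]                       # positions predicting the target
    return F.cross_entropy(pred, target_ids[0])

# One greedy-coordinate-gradient step (GCG repeats this many times):
onehot = F.one_hot(suffix_ids[0], num_classes=embed_matrix.shape[0]).float()
onehot.requires_grad_(True)
loss = target_loss(onehot)
loss.backward()
# Tokens with the most negative gradient at each suffix position are the
# candidate single-token swaps expected to reduce the loss the most.
top_k_candidates = (-onehot.grad).topk(k=256, dim=-1).indices  # (suffix_len, k)
print("loss:", loss.item(), "candidate table:", tuple(top_k_candidates.shape))
```

In a full GCG loop, a batch of single-token swaps is sampled from this candidate table, each swap is re-scored exactly, and the best suffix is kept for the next iteration; the resulting conversation is then passed to Llama Guard for an end-to-end safety judgment rather than being matched against rejection phrases.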

Keywords

LLM safety, Adversarial attacks, Gradient-based attacks, Chain of thought, Llama Guard

Download

Enhancing Adversarial Attacks through Chain of Thought