Super Suffixes: Bypassing Text Generation Alignment and Guard Models Simultaneously

Generative AI & LLMs
Published as arXiv:2512.11783v1
Authors

Andrew Adiletta, Kathryn Adiletta, Kemal Derya, Berk Sunar

Abstract

The rapid deployment of Large Language Models (LLMs) has created an urgent need for enhanced security and privacy measures in Machine Learning (ML). LLMs are increasingly being used to process untrusted text inputs and even generate executable code, often while having access to sensitive system controls. To address these security concerns, several companies have introduced guard models, which are smaller, specialized models designed to protect text generation models from adversarial or malicious inputs. In this work, we advance the study of adversarial inputs by introducing Super Suffixes, suffixes capable of overriding multiple alignment objectives across various models with different tokenization schemes. We demonstrate their effectiveness, along with our joint optimization technique, by successfully bypassing the protection mechanisms of Llama Prompt Guard 2 on five different text generation models for malicious text and code generation. To the best of our knowledge, this is the first work to reveal that Llama Prompt Guard 2 can be compromised through joint optimization. Additionally, by analyzing the changing similarity of a model's internal state to specific concept directions during token sequence processing, we propose an effective and lightweight method to detect Super Suffix attacks. We show that the cosine similarity between the residual stream and certain concept directions serves as a distinctive fingerprint of model intent. Our proposed countermeasure, DeltaGuard, significantly improves the detection of malicious prompts generated through Super Suffixes. It increases the non-benign classification rate to nearly 100%, making DeltaGuard a valuable addition to the guard model stack and enhancing robustness against adversarial prompt attacks.

Paper Summary

Problem
The rapid deployment of Large Language Models (LLMs) has created a pressing need for enhanced security and privacy measures in Machine Learning (ML). These models are increasingly being used to process untrusted text inputs and even generate executable code, often with access to sensitive system controls. To address security concerns, companies have introduced guard models, which are smaller, specialized models designed to protect text generation models from adversarial or malicious inputs.
Key Innovation
This paper introduces "Super Suffixes," adversarial suffixes that can override multiple alignment objectives across models with different tokenization schemes. The authors propose a joint optimization technique that simultaneously optimizes two distinct cost functions defined over different tokenization schemes, allowing a single Super Suffix to bypass the protection mechanisms of Llama Prompt Guard 2 while steering five different text generation models toward malicious text and code generation.
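To make the joint objective concrete, the sketch below shows one way such a combined cost could be scored. It is an illustration under stated assumptions, not the authors' implementation: the model identifiers, the benign-label index, and the weighting term are placeholders, the guard is treated as an ordinary sequence classifier, and the discrete search over suffix tokens (the actual optimization) is deliberately omitted.

```python
# Conceptual sketch (not the authors' implementation) of a joint objective
# scored over two models that tokenize the same suffix string independently.
# Model names, the benign-label index, and `alpha` are assumptions; the
# discrete search over suffix tokens -- the actual optimization -- is omitted.
import torch
import torch.nn.functional as F
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

GEN_NAME = "meta-llama/Llama-3.1-8B-Instruct"        # assumed target generation model
GUARD_NAME = "meta-llama/Llama-Prompt-Guard-2-86M"   # assumed guard classifier

gen_tok = AutoTokenizer.from_pretrained(GEN_NAME)
gen_model = AutoModelForCausalLM.from_pretrained(GEN_NAME)
guard_tok = AutoTokenizer.from_pretrained(GUARD_NAME)
guard = AutoModelForSequenceClassification.from_pretrained(GUARD_NAME)

def joint_loss(prompt: str, suffix: str, target: str,
               benign_label: int = 0, alpha: float = 1.0) -> torch.Tensor:
    """Sum of (a) the generation model's loss on a target continuation and
    (b) the guard model's loss toward its benign class, each computed under
    that model's own tokenizer."""
    # (a) Generation objective: score how likely `target` is after prompt + suffix.
    # Tokenizing the pieces separately vs. jointly can differ at the boundary;
    # this is an approximation a real implementation would handle more carefully.
    full = gen_tok(prompt + suffix + target, return_tensors="pt").input_ids
    tgt_len = gen_tok(target, add_special_tokens=False,
                      return_tensors="pt").input_ids.shape[1]
    labels = full.clone()
    labels[:, :-tgt_len] = -100                        # only score the target span
    gen_loss = gen_model(full, labels=labels).loss

    # (b) Guard objective: push the guard's classification of prompt + suffix
    # toward its benign class (the label index is model-specific).
    guard_inputs = guard_tok(prompt + suffix, return_tensors="pt")
    guard_logits = guard(**guard_inputs).logits
    guard_loss = F.cross_entropy(guard_logits, torch.tensor([benign_label]))

    return gen_loss + alpha * guard_loss
```

The essential point is that the same candidate suffix string is re-tokenized by each model, so the search must find text whose effect survives both tokenization schemes at once.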
Practical Impact
The research has significant practical implications for the security and safety of deployed LLMs. Super Suffixes and the accompanying joint optimization technique let attackers slip past the protection mechanisms of guard models, underscoring the need for more robust defenses. The proposed countermeasure, DeltaGuard, substantially improves the detection of malicious prompts generated through Super Suffixes, making it a valuable addition to the guard model stack.
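The abstract attributes DeltaGuard's signal to how the cosine similarity between the residual stream and certain concept directions evolves as the model processes a prompt. The sketch below shows one way such a per-token similarity trace could be measured; it is a hedged reconstruction rather than the paper's method, and the model name, layer index, and the file holding the pre-computed concept direction are all placeholders.

```python
# Minimal sketch of the fingerprinting measurement described in the abstract:
# track, token by token, how closely a chosen layer's residual stream aligns
# with a pre-computed "concept direction". The model, layer, and the origin of
# the concept direction are illustrative assumptions, not the paper's choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"       # assumed; any causal LM works
LAYER = 20                                            # assumed layer to probe

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Hypothetical pre-computed direction, e.g. the difference of mean hidden states
# over harmful vs. benign prompts at LAYER (one common way to derive such a
# direction; the paper may construct it differently).
concept_dir = torch.load("harmful_concept_direction.pt")   # shape: [hidden_size]
concept_dir = concept_dir / concept_dir.norm()

def similarity_trace(prompt: str) -> torch.Tensor:
    """Cosine similarity of each token position's residual stream to the concept direction."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    hidden = out.hidden_states[LAYER][0]              # [seq_len, hidden_size]
    hidden = hidden / hidden.norm(dim=-1, keepdim=True)
    return hidden @ concept_dir                       # one similarity value per token

# A detector in this spirit flags prompts whose similarity trajectory departs
# sharply from the pattern benign inputs produce.
```

Because the trace is computed from hidden states the generation model already produces, a check of this kind adds little overhead, which is consistent with the paper's description of DeltaGuard as lightweight.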
Analogy / Intuitive Explanation
Imagine a game of cat and mouse, where the cat (the text generation model) is trying to generate safe and coherent text, while the mouse (the attacker) is trying to trick the cat into generating malicious text. The guard model is like a watchdog that tries to catch the mouse, but a Super Suffix is a clever trick that lets the mouse slip past the watchdog and still fool the cat. The joint optimization technique is the tool that crafts a single trick good enough to outsmart both the cat and the watchdog at the same time.
Paper Information
Categories: cs.CR, cs.AI
Published Date:
arXiv ID: 2512.11783v1
