ObfusQAte: A Proposed Framework to Evaluate LLM Robustness on Obfuscated Factual Question Answering
Abstract
ObfusQA, a novel framework with multi-tiered obfuscation levels, evaluates the robustness and adaptability of Large Language Models (LLMs) by examining their performance on obfuscated questions.
The rapid proliferation of Large Language Models (LLMs) has significantly contributed to the development of equitable AI systems capable of factual question answering (QA). However, no known study tests LLM robustness when models are presented with obfuscated versions of questions. To systematically evaluate these limitations, we propose a novel technique, ObfusQAte, and, leveraging it, introduce ObfusQA, a comprehensive, first-of-its-kind framework with multi-tiered obfuscation levels designed to examine LLM capabilities across three distinct dimensions: (i) Named-Entity Indirection, (ii) Distractor Indirection, and (iii) Contextual Overload. By capturing these fine-grained distinctions in language, ObfusQA provides a comprehensive benchmark for evaluating LLM robustness and adaptability. Our study observes that LLMs tend to fail or generate hallucinated responses when confronted with these increasingly nuanced variations. To foster research in this direction, we make ObfusQAte publicly available.
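To make the three obfuscation tiers concrete, the sketch below hand-writes one obfuscated variant per tier for a single factual question. This is a hypothetical illustration based on one plausible reading of the tier names, not the authors' released ObfusQAte code; the base question and all rewrites are assumptions chosen purely for demonstration.

```python
# Hypothetical illustration (not the authors' released ObfusQAte code):
# hand-written examples of what each ObfusQA obfuscation tier can look like
# for a single factual question.

BASE_QUESTION = "Who wrote the novel Nineteen Eighty-Four?"

def named_entity_indirection(question: str) -> str:
    """Tier 1: replace the direct entity mention with an indirect description."""
    return "Who wrote the dystopian novel set in Airstrip One and published in 1949?"

def distractor_indirection(question: str) -> str:
    """Tier 2: keep the question but prepend plausible yet irrelevant entities."""
    distractors = ("Aldous Huxley wrote Brave New World, and Ray Bradbury "
                   "wrote Fahrenheit 451. ")
    return distractors + question

def contextual_overload(question: str) -> str:
    """Tier 3: bury the question inside long, noisy context."""
    filler = ("Twentieth-century fiction spans many movements, from modernism "
              "to postwar realism, involving countless authors, publishers, "
              "and literary prizes. ")
    return filler * 3 + "Amid all of this, " + question[0].lower() + question[1:]

if __name__ == "__main__":
    for tier in (named_entity_indirection, distractor_indirection, contextual_overload):
        print(f"[{tier.__name__}]\n{tier(BASE_QUESTION)}\n")
```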
Community
This paper introduces ObfusQAte, a novel framework and dataset for systematically testing LLM robustness against semantically obfuscated factual questions. It reveals significant performance drops and highlights vulnerabilities in reasoning under indirect, distractor-laden, and noisy query formulations.
Librarian Bot (automated message): the following similar papers were recommended by the Semantic Scholar API.
- LastingBench: Defend Benchmarks Against Knowledge Leakage (2025)
- Can LLMs Detect Their Confabulations? Estimating Reliability in Uncertainty-Aware Language Models (2025)
- AutoEvoEval: An Automated Framework for Evolving Close-Ended LLM Evaluation Data (2025)
- PrismRAG: Boosting RAG Factuality with Distractor Resilience and Strategized Reasoning (2025)
- Answer-Centric or Reasoning-Driven? Uncovering the Latent Memory Anchor in LLMs (2025)
- "Lost-in-the-Later": Framework for Quantifying Contextual Grounding in Large Language Models (2025)
- Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to QA Benchmarks (2025)
No models, datasets, Spaces, or collections on Hugging Face currently link to this paper.