---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- Reasoning
- LLM
- Encryption
- Decryption
size_categories:
- 1K<n<10K
configs:
- config_name: Rot13
  data_files:
  - split: test
    path: data/Rot13.jsonl
- config_name: Atbash
  data_files:
  - split: test
    path: data/Atbash.jsonl
- config_name: Polybius
  data_files:
  - split: test
    path: data/Polybius.jsonl
- config_name: Vigenere
  data_files:
  - split: test
    path: data/Vigenere.jsonl
- config_name: Reverse
  data_files:
  - split: test
    path: data/Reverse.jsonl
- config_name: SwapPairs
  data_files:
  - split: test
    path: data/SwapPairs.jsonl
- config_name: ParityShift
  data_files:
  - split: test
    path: data/ParityShift.jsonl
- config_name: DualAvgCode
  data_files:
  - split: test
    path: data/DualAvgCode.jsonl
- config_name: WordShift
  data_files:
  - split: test
    path: data/WordShift.jsonl
---
# CipherBank Benchmark

## Benchmark description

CipherBank is a comprehensive benchmark designed to evaluate the reasoning capabilities of LLMs on cryptographic decryption tasks.
CipherBank comprises 2,358 meticulously crafted problems covering 262 unique plaintexts across 5 domains and 14 subdomains, with a focus on privacy-sensitive and real-world scenarios that necessitate encryption. From a cryptographic perspective, CipherBank incorporates 3 major categories of encryption methods spanning 9 distinct algorithms, ranging from classical ciphers to custom cryptographic techniques.
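
Each cipher is exposed as a separate configuration with a single `test` split (see the YAML configs above). Below is a minimal sketch of loading one configuration with the Hugging Face `datasets` library; the repository ID is a placeholder, so substitute this dataset's actual Hub path.

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual Hub path of this dataset.
REPO_ID = "<org>/CipherBank"

# Each cipher (Rot13, Atbash, Polybius, Vigenere, Reverse, SwapPairs,
# ParityShift, DualAvgCode, WordShift) is its own config with a "test" split.
rot13_test = load_dataset(REPO_ID, "Rot13", split="test")

print(rot13_test)      # number of rows and column names
print(rot13_test[0])   # inspect a single decryption problem
```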

## Model Performance

We evaluate state-of-the-art LLMs on CipherBank, including general-purpose chat models such as GPT-4o and DeepSeek-V3 as well as cutting-edge reasoning-focused models such as o1 and DeepSeek-R1. The results reveal significant gaps in reasoning ability, not only between general-purpose chat LLMs and reasoning-focused LLMs, but also in how well current reasoning-focused models handle classical cryptographic decryption, highlighting the challenges these models face in understanding and manipulating encrypted data.

| **Model** |  **CipherBank Score (%)**|
|--------------|----|
|Qwen2.5-72B-Instruct |0.55    |
|Llama-3.1-70B-Instruct  |0.38   |
|DeepSeek-V3 | 9.86   |
|GPT-4o-mini-2024-07-18 |   1.00 |
|GPT-4o-2024-08-06 | 8.82   |
|gemini-1.5-pro  | 9.54   |
|gemini-2.0-flash-exp   |  8.65|
|**Claude-Sonnet-3.5-1022**  |  **45.14**  |
|DeepSeek-R1  | 25.91   |
|gemini-2.0-flash-thinking  | 13.49   |
|o1-mini-2024-09-12  | 20.07   |
|**o1-2024-12-17** | **40.59**   |
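
For illustration only, the sketch below shows one way a per-cipher score could be computed, assuming an exact-match metric and assuming the examples expose `ciphertext` and `plaintext` fields; both the metric and the field names are assumptions rather than the paper's actual scoring protocol, and `my_model_decrypt` is a hypothetical stand-in for the model under evaluation.

```python
from datasets import load_dataset


def my_model_decrypt(ciphertext: str) -> str:
    """Hypothetical stand-in for the model under evaluation; replace with a real model call."""
    return ciphertext  # identity placeholder -- a real system would return the decrypted plaintext


def exact_match_score(predictions, references):
    """Percentage of predictions matching the reference plaintext exactly,
    after simple whitespace/case normalization (assumed metric, illustration only)."""
    normalize = lambda s: " ".join(s.strip().lower().split())
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)


# Placeholder repo ID and assumed field names -- adjust to the real schema.
data = load_dataset("<org>/CipherBank", "Rot13", split="test")
references = [ex["plaintext"] for ex in data]                      # assumed field name
predictions = [my_model_decrypt(ex["ciphertext"]) for ex in data]  # assumed field name
print(f"Rot13 exact-match score: {exact_match_score(predictions, references):.2f}%")
```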

## More Information

Please see the paper and project website for more details:
- Paper: [https://arxiv.org/abs/2504.19093](https://arxiv.org/abs/2504.19093)
- Website: [https://cipherbankeva.github.io/](https://cipherbankeva.github.io/)

## Citation
If you find CipherBank useful for your research and applications, please cite it using the following BibTeX:
```bibtex
@misc{li2025cipherbankexploringboundaryllm,
      title={CipherBank: Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenges}, 
      author={Yu Li and Qizhi Pei and Mengyuan Sun and Honglin Lin and Chenlin Ming and Xin Gao and Jiang Wu and Conghui He and Lijun Wu},
      year={2025},
      eprint={2504.19093},
      archivePrefix={arXiv},
      primaryClass={cs.CR},
      url={https://arxiv.org/abs/2504.19093}, 
}
```