ZiruiWu committed · Commit 57e0e41 · verified · Parent(s): 8ccc747

Update README.md

Files changed (1): README.md (+89, −3)

---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---

# DreamOn-v0-7B

DreamOn is a novel discrete diffusion algorithm designed to address the variable-length generation challenge in code infilling. Unlike current discrete diffusion language models, our approach enables dynamic expansion and contraction of mask tokens during inference, providing flexible length control without requiring a predetermined canvas size.

Blog: https://hkunlp.github.io/blog/2025/dreamon/

GitHub: https://github.com/DreamLM/DreamOn
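
To make the mechanism concrete, here is a toy sketch of variable-length denoising. It is hypothetical and not DreamOn's actual inference code: the stand-in `predict` function and the `<expand>`/`<delete>` actions only illustrate how a masked region can grow, shrink, or get filled without a fixed canvas size.

```python
import random

MASK = "<mask>"

def predict(canvas, i):
    # Stand-in for the model: fill the masked slot, expand it, or delete it.
    return random.choice(["tok", "<expand>", "<delete>"])

canvas = ["prefix", MASK, MASK, "suffix"]
for _ in range(100):                      # step cap keeps the toy loop finite
    if MASK not in canvas:
        break
    i = canvas.index(MASK)                # toy policy: leftmost mask first
    action = predict(canvas, i)
    if action == "<expand>":
        canvas[i:i + 1] = [MASK, MASK]    # one mask grows into two
    elif action == "<delete>":
        canvas[i:i + 1] = []              # the mask is dropped, shrinking the canvas
    else:
        canvas[i] = action                # a real token is committed
    print(canvas)
```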

## Quick Start

```python
import torch
import time
from transformers import AutoModel, AutoTokenizer

def process_infilling_prompt(prefix, suffix, tokenizer, number_of_mask):
    # Build <bos> + prefix + <mask> * number_of_mask + suffix + <eos>;
    # the masked middle is the region the model infills.
    prefix = [tokenizer.bos_token_id] + tokenizer.encode(prefix, add_special_tokens=False)
    middle = [tokenizer.mask_token_id] * number_of_mask
    suffix = tokenizer.encode(suffix, add_special_tokens=False) + [tokenizer.eos_token_id]
    return prefix + middle + suffix

prefix = '''from typing import List
def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
'''

suffix = '''        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True

    return False
'''

model_path = 'Dream-org/DreamOn-v0-7B'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Set the initial mask token length when processing the prompt
input_ids = process_infilling_prompt(prefix, suffix, tokenizer, 4)
input_ids = torch.LongTensor([input_ids]).to("cuda")

model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
model = model.to("cuda").eval()

output = model.diffusion_generate(
    input_ids,
    temperature=0.2,
    alg='entropy',
    alg_temp=0,
    top_p=0.9,
    max_new_tokens=64,  # set the maximum number of new tokens for infilling
    return_dict_in_generate=True,
    output_history=True,
    number_transfer_tokens=1,
)

# Replay the intermediate canvas at each denoising step
history = output.history
for i, h in enumerate(history):
    print("########################")
    time.sleep(0.2)
    print(tokenizer.decode(h.tolist()), end="\n\n")
```
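
Each element of `output.history` is the token canvas after one denoising step, so the last element is the final result. As a small post-processing sketch (not part of the official API; it assumes the decoded text still begins with `prefix` and ends with `suffix`, which tokenization round-trips may not always preserve), you can strip the known context to recover only the infilled middle:

```python
# Decode the final canvas and strip the known prefix/suffix to isolate the infill.
final_text = tokenizer.decode(output.history[-1].tolist(), skip_special_tokens=True)
middle = final_text
if middle.startswith(prefix):
    middle = middle[len(prefix):]
if middle.endswith(suffix):
    middle = middle[: len(middle) - len(suffix)]
print(middle)
```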

## Parameters
- `input_ids`: The input token ids.
- `max_new_tokens`: The maximum number of tokens to generate. Note that the context length (input + output) of Dream is currently 2048, and the mask tokens added to the prompt are counted as new tokens. Therefore, `max_new_tokens` cannot be set to a value smaller than the number of mask tokens in the prompt. For example, if you set `number_of_mask` to 4, then `max_new_tokens` should be at least 4.
- `output_history`: Whether to return the output at each intermediate denoising step.
- `return_dict_in_generate`: Controls the output format; usually set to `True`.
- `number_transfer_tokens`: The number of tokens to predict at each denoising step. We mainly test our model with `number_transfer_tokens` set to 1; other settings are not fully tested.
- `temperature`: The value used to modulate the next-token probabilities. Defaults to 0.0. The smaller the value, the more accurate the results (e.g., in math or coding); the larger the value, the more diverse the results (e.g., in general conversation). If you notice repeated results, consider increasing the temperature.
- `top_p`: If set to a float < 1, only the smallest set of the most probable tokens whose probabilities add up to `top_p` or higher is kept for generation. Defaults to `None`. Controls the diversity of generation.
- `top_k`: The number of highest-probability vocabulary tokens to keep for top-k filtering. Defaults to `None`. Controls the diversity of generation.
- `alg`: The remasking strategy in diffusion sampling, controlling the token generation order. We support one random strategy and three confidence-based strategies (illustrated in the sketch after this list):
  - `origin`: Tokens are generated in a random order.
  - `maskgit_plus`: Tokens are generated based on the top-1 confidence, following https://arxiv.org/abs/2202.04200.
  - `topk_margin`: Tokens are generated based on the margin confidence, computed as `top1 - top2`, following https://arxiv.org/abs/2502.06768.
  - `entropy`: Tokens are generated based on the entropy of each token's distribution.
- `alg_temp`: Adds randomness to `alg` when using confidence-based strategies. Defaults to `None`.
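
For intuition, here is a minimal sketch of how the three confidence scores could be computed from per-position logits; it is illustrative only, not DreamOn's internal implementation. Positions with the most confident scores are unmasked first, and `alg_temp > 0` roughly corresponds to adding sampling noise to that choice.

```python
import torch

def confidence_scores(logits: torch.Tensor):
    """Illustrative confidence scores for masked positions.

    logits: (num_masked_positions, vocab_size)
    """
    probs = torch.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).values
    maskgit_plus = top2[..., 0]                # top-1 probability (higher = more confident)
    topk_margin = top2[..., 0] - top2[..., 1]  # top1 - top2 margin (higher = more confident)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)  # lower = more confident
    return maskgit_plus, topk_margin, entropy
```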
88
+
89
+ Note: We currently do not support attention mask, as we recompute attention mask each denoising step to support variable-length generation.