---
language: en
license: apache-2.0
model_name: gpt2-10.onnx
tags:
- validated
- text
- machine_comprehension
- gpt-2
---
<!--- SPDX-License-Identifier: Apache-2.0 -->

# GPT-2

## Use-cases
Transformer-based language model for text generation.

## Description
[GPT-2](https://openai.com/blog/better-language-models/) is a large transformer-based language model with a simple objective: predict the next word, given all of the previous words within some text.

## Model

|Model |Download | Download (with sample test data)|ONNX version|Opset version|Accuracy |
|-------------|:--------------|:--------------|:--------------|:--------------|:--------------|
|GPT-2 |[522.81 MB](model/gpt2-10.onnx) | [438.3 MB](model/gpt2-10.tar.gz)| 1.6 | 10 |mAP of [0.024](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit#gid=0)|
|GPT-2-LM-HEAD |[664.87 MB](model/gpt2-lm-head-10.onnx) | [607 MB](model/gpt2-lm-head-10.tar.gz)| 1.6 | 10 |mAP of [0.024](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit#gid=0)|

### Source
PyTorch GPT-2 ==> ONNX GPT-2
PyTorch GPT-2 + script changes ==> ONNX GPT-2-LM-HEAD

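For reference, a minimal sketch of this PyTorch-to-ONNX path using `torch.onnx.export` on the Hugging Face `GPT2Model`; the [GPT2-export.py](dependencies/GPT2-export.py) script referenced below is the authoritative version, and the input/output names, dynamic axes, and config tweaks here are illustrative assumptions only:
```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
model.eval()
# Newer transformers releases return ModelOutput objects; plain tuples trace more cleanly.
model.config.return_dict = False

# Dummy input used only to trace the graph.
input_ids = torch.tensor([tokenizer.encode("Here is some text to encode : Hello World")])

torch.onnx.export(
    model,
    (input_ids,),
    "gpt2-10.onnx",
    opset_version=10,  # matches the opset listed in the table above
    input_names=["input_ids"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch_size", 1: "sequence_length"},
        "last_hidden_state": {0: "batch_size", 1: "sequence_length"},
    },
)
```
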
## Inference
The script for ONNX model conversion and ONNX Runtime inference is [here](dependencies/GPT2-export.py).

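As a quick sanity check without the full script, the exported model can be run with ONNX Runtime along these lines (a sketch; it assumes the `model/gpt2-10.onnx` path from the table above and reads the input name from the session rather than hard-coding it):
```python
import numpy as np
import onnxruntime as ort
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
session = ort.InferenceSession("model/gpt2-10.onnx")
input_name = session.get_inputs()[0].name  # assumed to be the input_ids tensor

# int64 token ids with a batch dimension: (1, sequence_length)
input_ids = np.array([tokenizer.encode("Here is some text to encode : Hello World")], dtype=np.int64)

outputs = session.run(None, {input_name: input_ids})
last_hidden_state = outputs[0]  # (batch_size, sequence_length, hidden_size)
print(last_hidden_state.shape)
```
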
### Input to model
A sequence of words as a string, e.g. "Here is some text to encode : Hello World", tokenized with byte-pair encoding (BPE).

**input_ids**: Indices of the input tokens in the vocabulary. It is a long tensor of dynamic shape (batch_size, sequence_length).

### Preprocessing steps
Use `tokenizer.encode()` to encode the input text:
```python
import torch
from transformers import GPT2Tokenizer

text = "Here is some text to encode : Hello World"
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Wrap the encoded ids in a batch dimension: shape (1, sequence_length).
tokens_tensor = torch.tensor([tokenizer.encode(text)])
```

### Output of model
For the GPT-2 model:

- **last_hidden_state**: Sequence of hidden states at the last layer of the model. It is a float tensor of shape (batch_size, sequence_length, hidden_size).
- **past**: Pre-computed hidden states (the keys and values of the attention blocks). It is a list with one tensor per layer, each of shape (2, batch_size, num_heads, sequence_length, hidden_size / num_heads).

The output of this model is the tuple (last_hidden_state, past).

For the GPT-2-LM-HEAD model:

- **prediction_scores**: Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). It is a float tensor of shape (batch_size, sequence_length, vocab_size).
- **past**: Pre-computed hidden states (the keys and values of the attention blocks). It is a list with one tensor per layer, each of shape (2, batch_size, num_heads, sequence_length, hidden_size / num_heads).

The output of this model is the tuple (prediction_scores, past).

Note that `output_hidden_states=False` and `output_attentions=False` are set in the model's `PretrainedConfig`.

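To see these shapes concretely, here is a minimal sketch, assuming an older transformers release (the 2.x series used for this export) in which `GPT2Model` returns the plain `(last_hidden_state, past)` tuple described above:
```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
model.eval()

tokens_tensor = torch.tensor([tokenizer.encode("Here is some text to encode : Hello World")])
with torch.no_grad():
    last_hidden_state, past = model(tokens_tensor)

print(last_hidden_state.shape)  # (1, sequence_length, 768) for the base model
print(len(past))                # one entry per layer (12 for the base model)
```
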
### Postprocessing steps
For the GPT-2 model:

```python
# `model` here is the PyTorch GPT2Model used for the export above.
outputs = model(input_ids)
last_hidden_states = outputs[0]  # shape (batch_size, sequence_length, hidden_size)
```

For the GPT-2-LM-HEAD model, to generate the next 10 words:
```python
import numpy as np
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

batch_size = 1
length = 10
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').to(device)
model.eval()

text = "Here is some text to encode : Hello World!"
tokens = np.array(tokenizer.encode(text))
context = torch.tensor(tokens, device=device, dtype=torch.long).unsqueeze(0).repeat(batch_size, 1)
output = context

with torch.no_grad():
    for _ in range(length):
        # Condition on the full sequence generated so far (no key/value cache for simplicity).
        logits = model(output)[0][:, -1, :]
        probs = F.softmax(logits, dim=-1)
        # Greedy decoding: take the most likely next token and append it.
        _, next_token = torch.topk(probs, k=1, dim=-1)
        output = torch.cat((output, next_token), dim=1)

# Decode only the newly generated tokens.
output = output[:, len(tokens):].tolist()
for i in range(batch_size):
    print(tokenizer.decode(output[i]))
```
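
The same greedy loop can also be run directly against the exported `gpt2-lm-head-10.onnx` with ONNX Runtime. A minimal sketch, assuming the file path from the table above and that the first output is `prediction_scores`; the exact input/output names and ordering depend on the export, so check `session.get_inputs()` and `session.get_outputs()`:
```python
import numpy as np
import onnxruntime as ort
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
session = ort.InferenceSession("model/gpt2-lm-head-10.onnx")
input_name = session.get_inputs()[0].name  # assumed to be the input_ids tensor

tokens = tokenizer.encode("Here is some text to encode : Hello World!")
output = np.array([tokens], dtype=np.int64)  # (1, sequence_length)

for _ in range(10):
    # First output is assumed to be prediction_scores of shape (1, seq_len, vocab_size).
    prediction_scores = session.run(None, {input_name: output})[0]
    # Softmax is monotonic, so greedy decoding can argmax the raw scores directly.
    next_token = prediction_scores[:, -1, :].argmax(axis=-1).reshape(1, 1).astype(np.int64)
    output = np.concatenate([output, next_token], axis=1)

print(tokenizer.decode(output[0, len(tokens):].tolist()))
```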
<hr>

## Dataset (Train and validation)
The original model from OpenAI is pretrained on a dataset of [8 million web pages](https://openai.com/blog/better-language-models).
The pretrained model is referenced in the [huggingface/transformers](https://github.com/huggingface/transformers/blob/master/transformers/modeling_gpt2.py) repository as a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~40 GB of text data.
https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin

<hr>

## Validation accuracy
Metric and benchmarking details are provided by HuggingFace in this [post](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2).
<hr>

## Publication/Attribution
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language Models are Unsupervised Multitask Learners. 2019.

## References
This model is converted directly from [huggingface/transformers](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py).
<hr>

## Contributors
- Negin Raoof
- Joddiy Zhang
<hr>

## License
Apache 2.0 License
<hr>