Transformer XL¶
Overview¶
The Transformer-XL model was proposed in Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. It is a causal (uni-directional) transformer with relative (sinusoidal) positional embeddings that can reuse previously computed hidden states to attend to a longer context (memory). This model also uses adaptive softmax inputs and outputs (tied).
The abstract from the paper is the following:
Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens.
Tips:
- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left. 
- Transformer-XL is one of the few models that has no sequence length limit. 
The original code can be found here.
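As described above, the hidden states computed for one segment can be passed back to the model as mems when processing the next segment, so attention can reach beyond the current segment. The snippet below is a minimal illustrative sketch of this segment-level recurrence (not taken from the original codebase); the text and segment length are arbitrary.

>>> import torch
>>> from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
>>> tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
>>> model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103', return_dict=True)
>>> text = "The quick brown fox jumps over the lazy dog and keeps on running through the long text"
>>> input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
>>> segment_length = 4  # illustrative value; real runs use much longer segments
>>> mems = None  # no memory before the first segment
>>> with torch.no_grad():
...     for start in range(0, input_ids.size(1), segment_length):
...         segment = input_ids[:, start:start + segment_length]
...         outputs = model(segment, mems=mems)
...         mems = outputs.mems  # cached hidden states reused by the next segment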
TransfoXLConfig¶
- 
class transformers.TransfoXLConfig(vocab_size=267735, cutoffs=[20000, 40000, 200000], d_model=1024, d_embed=1024, n_head=16, d_head=64, d_inner=4096, div_val=4, pre_lnorm=False, n_layer=18, mem_len=1600, clamp_len=1000, same_length=True, proj_share_all_but_first=True, attn_type=0, sample_softmax=-1, adaptive=True, dropout=0.1, dropatt=0.0, untie_r=True, init='normal', init_range=0.01, proj_init_std=0.01, init_std=0.02, layer_norm_epsilon=1e-05, eos_token_id=0, **kwargs)[source]¶
- This is the configuration class to store the configuration of a TransfoXLModel or a TFTransfoXLModel. It is used to instantiate a Transformer-XL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Transformer XL architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. - Parameters
- vocab_size ( - int, optional, defaults to 267735) – Vocabulary size of the Transformer-XL model. Defines the number of different tokens that can be represented by the input_ids passed when calling TransfoXLModel or TFTransfoXLModel.
- cutoffs ( - List[int], optional, defaults to- [20000, 40000, 200000]) – Cutoffs for the adaptive softmax.
- d_model ( - int, optional, defaults to 1024) – Dimensionality of the model’s hidden states.
- d_embed ( - int, optional, defaults to 1024) – Dimensionality of the embeddings
- n_head ( - int, optional, defaults to 16) – Number of attention heads for each attention layer in the Transformer encoder.
- d_head ( - int, optional, defaults to 64) – Dimensionality of the model’s heads.
- d_inner ( - int, optional, defaults to 4096) – Inner dimension of the feed-forward layers.
- div_val ( - int, optional, defaults to 4) – Divisor value for the adaptive input and softmax.
- pre_lnorm ( - boolean, optional, defaults to- False) – Whether or not to apply LayerNorm to the input instead of the output in the blocks.
- n_layer ( - int, optional, defaults to 18) – Number of hidden layers in the Transformer encoder.
- mem_len ( - int, optional, defaults to 1600) – Length of the retained previous hidden states (the memory).
- clamp_len ( - int, optional, defaults to 1000) – Relative positions larger than clamp_len reuse the same positional embedding.
- same_length ( - boolean, optional, defaults to - True) – Whether or not to use the same attention length for all tokens.
- proj_share_all_but_first ( - boolean, optional, defaults to - True) – Whether or not to share all adaptive softmax projections except the first one.
- attn_type ( - int, optional, defaults to 0) – Attention type. 0 for Transformer-XL, 1 for Shaw et al, 2 for Vaswani et al, 3 for Al Rfou et al.
- sample_softmax ( - int, optional, defaults to -1) – Number of samples in the sampled softmax.
- adaptive ( - boolean, optional, defaults to- True) – Whether or not to use adaptive softmax.
- dropout ( - float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- dropatt ( - float, optional, defaults to 0) – The dropout ratio for the attention probabilities.
- untie_r ( - boolean, optional, defaults to - True) – Whether or not to untie relative position biases.
- init ( - str, optional, defaults to- "normal") – Parameter initializer to use.
- init_range ( - float, optional, defaults to 0.01) – Parameters initialized by U(-init_range, init_range).
- proj_init_std ( - float, optional, defaults to 0.01) – Parameters initialized by N(0, proj_init_std)
- init_std ( - float, optional, defaults to 0.02) – Parameters initialized by N(0, init_std)
- layer_norm_epsilon ( - float, optional, defaults to 1e-5) – The epsilon to use in the layer normalization layers.
 
- Examples:
>>> from transformers import TransfoXLConfig, TransfoXLModel
>>> # Initializing a Transformer XL configuration
>>> configuration = TransfoXLConfig()
>>> # Initializing a model from the configuration
>>> model = TransfoXLModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
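The configuration arguments listed above can also be overridden to instantiate a smaller, randomly initialized model for quick experiments. The values below are purely illustrative and do not correspond to the pretrained transfo-xl-wt103 checkpoint:

>>> from transformers import TransfoXLConfig, TransfoXLModel
>>> configuration = TransfoXLConfig(
...     n_layer=6,          # fewer layers than the default 18
...     n_head=8,
...     d_model=512,
...     d_embed=512,
...     d_inner=2048,
...     mem_len=400,        # length of the retained hidden-state memory
...     clamp_len=400,      # relative positions beyond this reuse the same embedding
... )
>>> model = TransfoXLModel(configuration)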
TransfoXLTokenizer¶
- 
class transformers.TransfoXLTokenizer(special=None, min_freq=0, max_size=None, lower_case=False, delimiter=None, vocab_file=None, pretrained_vocab_file=None, never_split=None, unk_token='<unk>', eos_token='<eos>', additional_special_tokens=['<formula>'], language='en', **kwargs)[source]¶
- Construct a Transformer-XL tokenizer adapted from the Vocab class in the original code. The Transformer-XL tokenizer is a word-level tokenizer (no sub-word tokenization). - This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. - Parameters
- special ( - List[str], optional) – A list of special tokens (to be treated by the original implementation of this tokenizer).
- min_freq ( - int, optional, defaults to 0) – The minimum number of times a token has to be present in order to be kept in the vocabulary (otherwise it will be mapped to- unk_token).
- max_size ( - int, optional) – The maximum size of the vocabulary. If left unset, it will default to the size of the vocabulary found after excluding the tokens according to the- min_freqrule.
- lower_case ( - bool, optional, defaults to- False) – Whether or not to lowercase the input when tokenizing.
- delimiter ( - str, optional) – The delimiter used between tokens.
- vocab_file ( - str, optional) – File containing the vocabulary (from the original implementation).
- pretrained_vocab_file ( - str, optional) – File containing the vocabulary as saved with the- save_pretrained()method.
- never_split ( - List[str], optional) – A list of tokens that should never be split during tokenization.
- unk_token ( - str, optional, defaults to- "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- eos_token ( - str, optional, defaults to- "<eos>") – The end of sequence token.
- additional_special_tokens ( - List[str], optional, defaults to- ["<formula>"]) – A list of additional special tokens (for the HuggingFace functionality).
- language ( - str, optional, defaults to - "en") – The language of this tokenizer (used for Moses preprocessing).
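A short usage sketch (illustrative only) of the word-level behaviour described above: tokens are whole words, and any word missing from the vocabulary is mapped to the unk_token.

>>> from transformers import TransfoXLTokenizer
>>> tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
>>> tokens = tokenizer.tokenize("Hello , my dog is cute")
>>> input_ids = tokenizer.convert_tokens_to_ids(tokens)
>>> tokenizer.convert_ids_to_tokens(input_ids)  # out-of-vocabulary words come back as '<unk>'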
 
 
TransfoXLTokenizerFast¶
- 
class transformers.TransfoXLTokenizerFast(special=None, min_freq=0, max_size=None, lower_case=False, delimiter=None, vocab_file=None, pretrained_vocab_file=None, never_split=None, unk_token='<unk>', eos_token='<eos>', additional_special_tokens=['<formula>'], add_eos=False, add_double_eos=False, normalization=None, **kwargs)[source]¶
- Construct a “fast” Transformer-XL tokenizer (backed by HuggingFace’s tokenizers library), adapted from the Vocab class in the original code. The Transformer-XL tokenizer is a word-level tokenizer (no sub-word tokenization). - This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. - Parameters
- special ( - List[str], optional) – A list of special tokens (to be treated by the original implementation of this tokenizer).
- min_freq ( - int, optional, defaults to 0) – The minimum number of times a token has to be present in order to be kept in the vocabulary (otherwise it will be mapped to- unk_token).
- max_size ( - int, optional) – The maximum size of the vocabulary. If left unset, it will default to the size of the vocabulary found after excluding the tokens according to the- min_freqrule.
- lower_case ( - bool, optional, defaults to- False) – Whether or not to lowercase the input when tokenizing.
- delimiter ( - str, optional) – The delimiter used between tokens.
- vocab_file ( - str, optional) – File containing the vocabulary (from the original implementation).
- pretrained_vocab_file ( - str, optional) – File containing the vocabulary as saved with the- save_pretrained()method.
- never_split ( - List[str], optional) – A list of tokens that should never be split during tokenization.
- unk_token ( - str, optional, defaults to- "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- eos_token ( - str, optional, defaults to- "<eos>") – The end of sequence token.
- additional_special_tokens ( - List[str], optional, defaults to- ["<formula>"]) – A list of additional special tokens (for the HuggingFace functionality).
- add_eos ( - bool, optional, defaults to - False) – Whether or not to add the end-of-sequence token at the end of the sequence.
- add_double_eos ( - bool, optional, defaults to - False) – Whether or not to add the end-of-sequence token both at the beginning and at the end of the sequence.
- normalization ( - str, optional) – The normalization to apply to the text (handled by the backing tokenizers library).
 
 - 
save_pretrained(save_directory)[source]¶
- Save the tokenizer vocabulary files together with: - added tokens, 
- special tokens to class attributes mapping, 
- tokenizer instantiation positional and keywords inputs (e.g. do_lower_case for Bert). 
- This method makes sure the full tokenizer can then be re-loaded using the from_pretrained() class method. - Warning - This won’t save modifications you may have applied to the tokenizer after instantiation (for instance, modifying tokenizer.do_lower_case after creation). - Parameters
- save_directory ( - str) – The path to a directory where the tokenizer will be saved.
- Returns
- The files saved. 
- Return type
- A tuple of - str
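A minimal round-trip sketch (illustrative only) of save_pretrained() together with from_pretrained(); the directory name below is just an example.

>>> from transformers import TransfoXLTokenizerFast
>>> tokenizer = TransfoXLTokenizerFast.from_pretrained('transfo-xl-wt103')
>>> saved_files = tokenizer.save_pretrained('./my-transfo-xl-tokenizer')  # tuple of saved file paths
>>> reloaded = TransfoXLTokenizerFast.from_pretrained('./my-transfo-xl-tokenizer')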
 
 
TransfoXL specific outputs¶
- 
class transformers.modeling_transfo_xl.TransfoXLModelOutput(last_hidden_state: torch.FloatTensor, mems: List[torch.FloatTensor] = None, hidden_states: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[torch.FloatTensor]] = None)[source]¶
- Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). - Parameters
- last_hidden_state ( - torch.FloatTensorof shape- (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
- mems ( - List[torch.FloatTensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(torch.FloatTensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) –- Tuple of - torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(torch.FloatTensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) –- Tuple of - torch.FloatTensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
 
- 
class transformers.modeling_transfo_xl.TransfoXLLMHeadModelOutput(losses: Optional[torch.FloatTensor] = None, prediction_scores: torch.FloatTensor = None, mems: List[torch.FloatTensor] = None, hidden_states: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[torch.FloatTensor]] = None)[source]¶
- Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). - Parameters
- losses ( - torch.FloatTensorof shape (batch_size, sequence_length-1), optional, returned when- labelsis provided) – Language modeling losses (not reduced).
- prediction_scores ( - torch.FloatTensorof shape- (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
- mems ( - List[torch.FloatTensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(torch.FloatTensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) –- Tuple of - torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(torch.FloatTensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) –- Tuple of - torch.FloatTensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
 
- 
class transformers.modeling_tf_transfo_xl.TFTransfoXLModelOutput(last_hidden_state: tensorflow.python.framework.ops.Tensor = None, mems: List[tensorflow.python.framework.ops.Tensor] = None, hidden_states: Optional[Tuple[tensorflow.python.framework.ops.Tensor]] = None, attentions: Optional[Tuple[tensorflow.python.framework.ops.Tensor]] = None)[source]¶
- Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). - Parameters
- last_hidden_state ( - tf.Tensorof shape- (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
- mems ( - List[tf.Tensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(tf.Tensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) –- Tuple of - tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(tf.Tensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) –- Tuple of - tf.Tensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
 
- 
class transformers.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput(prediction_scores: tensorflow.python.framework.ops.Tensor = None, mems: List[tensorflow.python.framework.ops.Tensor] = None, hidden_states: Optional[Tuple[tensorflow.python.framework.ops.Tensor]] = None, attentions: Optional[Tuple[tensorflow.python.framework.ops.Tensor]] = None)[source]¶
- Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). - Parameters
- losses ( - tf.Tensorof shape (batch_size, sequence_length-1), optional, returned when- labelsis provided) – Language modeling losses (not reduced).
- prediction_scores ( - tf.Tensorof shape- (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
- mems ( - List[tf.Tensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(tf.Tensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) –- Tuple of - tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(tf.Tensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) –- Tuple of - tf.Tensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
 
TransfoXLModel¶
- 
class transformers.TransfoXLModel(config)[source]¶
- The bare Transformer-XL Model transformer outputting raw hidden-states without any specific head on top. - This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). - This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. - Parameters
- config ( - TransfoXLConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the- from_pretrained()method to load the model weights.
 - 
forward(input_ids=None, mems=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]¶
- The TransfoXLModel forward method overrides the __call__() special method. - Note - Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. - Parameters
- input_ids ( - torch.LongTensorof shape- (batch_size, sequence_length)) –- Indices of input sequence tokens in the vocabulary. - Indices can be obtained using - TransfoXLTokenizer. See- transformers.PreTrainedTokenizer.encode()and- transformers.PreTrainedTokenizer.__call__()for details.
- mems ( - List[torch.FloatTensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see- memsoutput below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as- input_idsas they have already been computed.
- head_mask ( - torch.FloatTensorof shape- (num_heads,)or- (num_layers, num_heads), optional) –- Mask to nullify selected heads of the self-attention modules. Mask values selected in - [0, 1]:- 1 indicates the head is not masked, 
- 0 indicates the head is masked. 
 
- inputs_embeds ( - torch.FloatTensorof shape- (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing- input_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert- input_idsindices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions ( - bool, optional) – Whether or not to return the attentions tensors of all attention layers. See- attentionsunder returned tensors for more detail.
- output_hidden_states ( - bool, optional) – Whether or not to return the hidden states of all layers. See- hidden_statesunder returned tensors for more detail.
- return_dict ( - bool, optional) – Whether or not to return a- ModelOutputinstead of a plain tuple.
 
- Returns
- A - TransfoXLModelOutput(if- return_dict=Trueis passed or when- config.return_dict=True) or a tuple of- torch.FloatTensorcomprising various elements depending on the configuration (- TransfoXLConfig) and inputs.- last_hidden_state ( - torch.FloatTensorof shape- (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
- mems ( - List[torch.FloatTensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(torch.FloatTensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) – Tuple of- torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(torch.FloatTensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) – Tuple of- torch.FloatTensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
- Return type
- TransfoXLModelOutputor- tuple(torch.FloatTensor)
- Example:
>>> from transformers import TransfoXLTokenizer, TransfoXLModel
>>> import torch
>>> tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
>>> model = TransfoXLModel.from_pretrained('transfo-xl-wt103', return_dict=True)
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
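As a follow-up to the example above, the mems returned in the output can be fed back into a second forward pass so that the model also attends over the first segment. This is a minimal illustrative sketch, not part of the official documentation:

>>> import torch
>>> from transformers import TransfoXLTokenizer, TransfoXLModel
>>> tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
>>> model = TransfoXLModel.from_pretrained('transfo-xl-wt103', return_dict=True)
>>> first_ids = tokenizer("Hello , my dog is cute", return_tensors="pt")["input_ids"]
>>> second_ids = tokenizer("and it likes long contexts", return_tensors="pt")["input_ids"]
>>> with torch.no_grad():
...     out1 = model(first_ids)                   # first segment, no memory yet
...     out2 = model(second_ids, mems=out1.mems)  # second segment also attends to the first
>>> len(out2.mems)  # list of cached hidden-state tensors (see mems above)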
 
TransfoXLLMHeadModel¶
- 
class transformers.TransfoXLLMHeadModel(config)[source]¶
- The Transformer-XL Model with a language modeling head on top (adaptive softmax with weights tied to the adaptive input embeddings). - This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). - This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. - Parameters
- config ( - TransfoXLConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the- from_pretrained()method to load the model weights.
 - 
forward(input_ids=None, mems=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]¶
- The TransfoXLLMHeadModel forward method overrides the __call__() special method. - Note - Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. - Parameters
- input_ids ( - torch.LongTensorof shape- (batch_size, sequence_length)) –- Indices of input sequence tokens in the vocabulary. - Indices can be obtained using - TransfoXLTokenizer. See- transformers.PreTrainedTokenizer.encode()and- transformers.PreTrainedTokenizer.__call__()for details.
- mems ( - List[torch.FloatTensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see- memsoutput below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as- input_idsas they have already been computed.
- head_mask ( - torch.FloatTensorof shape- (num_heads,)or- (num_layers, num_heads), optional) –- Mask to nullify selected heads of the self-attention modules. Mask values selected in - [0, 1]:- 1 indicates the head is not masked, 
- 0 indicates the head is masked. 
 
- inputs_embeds ( - torch.FloatTensorof shape- (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing- input_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert- input_idsindices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions ( - bool, optional) – Whether or not to return the attentions tensors of all attention layers. See- attentionsunder returned tensors for more detail.
- output_hidden_states ( - bool, optional) – Whether or not to return the hidden states of all layers. See- hidden_statesunder returned tensors for more detail.
- return_dict ( - bool, optional) – Whether or not to return a- ModelOutputinstead of a plain tuple.
- labels ( - torch.LongTensor of shape - (batch_size, sequence_length), optional) – Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
 
- Returns
- A - TransfoXLLMHeadModelOutput(if- return_dict=Trueis passed or when- config.return_dict=True) or a tuple of- torch.FloatTensorcomprising various elements depending on the configuration (- TransfoXLConfig) and inputs.- losses ( - torch.FloatTensorof shape (batch_size, sequence_length-1), optional, returned when- labelsis provided) Language modeling losses (not reduced).
- prediction_scores ( - torch.FloatTensorof shape- (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
- mems ( - List[torch.FloatTensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(torch.FloatTensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) – Tuple of- torch.FloatTensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(torch.FloatTensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) – Tuple of- torch.FloatTensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
- Return type
- TransfoXLLMHeadModelOutputor- tuple(torch.FloatTensor)
- Example:
>>> import torch
>>> from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
>>> tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
>>> model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103', return_dict=True)
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
 
TFTransfoXLModel¶
- 
class transformers.TFTransfoXLModel(*args, **kwargs)[source]¶
- The bare Transformer-XL Model transformer outputting raw hidden-states without any specific head on top. - This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). - This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. - Note - TF 2.0 models accept two formats as inputs: - having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional arguments. 
- This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). - If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with input_ids only and nothing else: model(input_ids)
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - model([input_ids, attention_mask])or- model([input_ids, attention_mask, token_type_ids])
- a dictionary with one or several input Tensors associated to the input names given in the docstring: - model({"input_ids": input_ids, "token_type_ids": token_type_ids})
 - Parameters
- config ( - TransfoXLConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the- from_pretrained()method to load the model weights.
 - 
call(inputs, **kwargs)[source]¶
- The TFTransfoXLModel forward method overrides the __call__() special method. - Note - Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. - Parameters
- input_ids ( - tf.Tensor or - Numpy array of shape - (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using TransfoXLTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- mems ( - List[tf.Tensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see- memsoutput below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as- input_idsas they have already been computed.
- head_mask ( - tf.Tensoror- Numpy arrayof shape- (num_heads,)or- (num_layers, num_heads), optional) –- Mask to nullify selected heads of the self-attention modules. Mask values selected in - [0, 1]:- 1 indicates the head is not masked, 
- 0 indicates the head is masked. 
 
- inputs_embeds ( - tf.Tensoror- Numpy arrayof shape- (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing- input_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert- input_idsindices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions ( - bool, optional) – Whether or not to return the attentions tensors of all attention layers. See- attentionsunder returned tensors for more detail.
- output_hidden_states ( - bool, optional) – Whether or not to return the hidden states of all layers. See- hidden_statesunder returned tensors for more detail.
- return_dict ( - bool, optional) – Whether or not to return a- ModelOutputinstead of a plain tuple.
- training ( - bool, optional, defaults to- False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
 
- Returns
- A - TFTransfoXLModelOutput(if- return_dict=Trueis passed or when- config.return_dict=True) or a tuple of- tf.Tensorcomprising various elements depending on the configuration (- TransfoXLConfig) and inputs.- last_hidden_state ( - tf.Tensorof shape- (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
- mems ( - List[tf.Tensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(tf.Tensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) – Tuple of- tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(tf.Tensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) – Tuple of- tf.Tensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
- Return type
- TFTransfoXLModelOutputor- tuple(tf.Tensor)
- Example:
>>> from transformers import TransfoXLTokenizer, TFTransfoXLModel
>>> import tensorflow as tf
>>> tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
>>> model = TFTransfoXLModel.from_pretrained('transfo-xl-wt103')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
 
TFTransfoXLLMHeadModel¶
- 
class transformers.TFTransfoXLLMHeadModel(*args, **kwargs)[source]¶
- The Transformer-XL Model with a language modeling head on top (adaptive softmax with weights tied to the adaptive input embeddings). - This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). - This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. - Note - TF 2.0 models accept two formats as inputs: - having all inputs as keyword arguments (like PyTorch models), or
- having all inputs as a list, tuple or dict in the first positional arguments. 
- This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs). - If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument: - a single Tensor with input_ids only and nothing else: model(input_ids)
- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - model([input_ids, attention_mask])or- model([input_ids, attention_mask, token_type_ids])
- a dictionary with one or several input Tensors associated to the input names given in the docstring: - model({"input_ids": input_ids, "token_type_ids": token_type_ids})
 - Parameters
- config ( - TransfoXLConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the- from_pretrained()method to load the model weights.
 - 
call(inputs, mems=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False)[source]¶
- The TFTransfoXLLMHeadModel forward method overrides the __call__() special method. - Note - Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. - Parameters
- input_ids ( - tf.Tensor or - Numpy array of shape - (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using TransfoXLTokenizer. See transformers.PreTrainedTokenizer.__call__() and transformers.PreTrainedTokenizer.encode() for details.
- mems ( - List[tf.Tensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see- memsoutput below). Can be used to speed up sequential decoding. The token ids which have their mems given to this model should not be passed as- input_idsas they have already been computed.
- head_mask ( - tf.Tensoror- Numpy arrayof shape- (num_heads,)or- (num_layers, num_heads), optional) –- Mask to nullify selected heads of the self-attention modules. Mask values selected in - [0, 1]:- 1 indicates the head is not masked, 
- 0 indicates the head is masked. 
 
- inputs_embeds ( - tf.Tensoror- Numpy arrayof shape- (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing- input_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convert- input_idsindices into associated vectors than the model’s internal embedding lookup matrix.
- output_attentions ( - bool, optional) – Whether or not to return the attentions tensors of all attention layers. See- attentionsunder returned tensors for more detail.
- output_hidden_states ( - bool, optional) – Whether or not to return the hidden states of all layers. See- hidden_statesunder returned tensors for more detail.
- return_dict ( - bool, optional) – Whether or not to return a- ModelOutputinstead of a plain tuple.
- training ( - bool, optional, defaults to- False) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
 
- Returns
- A - TFTransfoXLLMHeadModelOutput(if- return_dict=Trueis passed or when- config.return_dict=True) or a tuple of- tf.Tensorcomprising various elements depending on the configuration (- TransfoXLConfig) and inputs.- losses ( - tf.Tensorof shape (batch_size, sequence_length-1), optional, returned when- labelsis provided) Language modeling losses (not reduced).
- prediction_scores ( - tf.Tensorof shape- (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
- mems ( - List[tf.Tensor]of length- config.n_layers) – Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see- memsinput) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.
- hidden_states ( - tuple(tf.Tensor), optional, returned when- output_hidden_states=Trueis passed or when- config.output_hidden_states=True) – Tuple of- tf.Tensor(one for the output of the embeddings + one for the output of each layer) of shape- (batch_size, sequence_length, hidden_size).- Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
- attentions ( - tuple(tf.Tensor), optional, returned when- output_attentions=Trueis passed or when- config.output_attentions=True) – Tuple of- tf.Tensor(one for each layer) of shape- (batch_size, num_heads, sequence_length, sequence_length).- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
 
- Return type
- TFTransfoXLLMHeadModelOutputor- tuple(tf.Tensor)
- Example:
>>> from transformers import TransfoXLTokenizer, TFTransfoXLLMHeadModel
>>> import tensorflow as tf
>>> tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
>>> model = TFTransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs[0]