LeroyDyer committed
Commit 54fef9b
1 Parent(s): c64ce72

Upload configuration_utils.py

Files changed (1)
  1. configuration_utils.py +1133 -0
configuration_utils.py ADDED
@@ -0,0 +1,1133 @@
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Configuration base class and utilities."""


import copy
import json
import os
import re
import warnings
from typing import Any, Dict, List, Optional, Tuple, Union

from packaging import version

from . import __version__
from .dynamic_module_utils import custom_object_save
from .utils import (
    CONFIG_NAME,
    PushToHubMixin,
    add_model_info_to_auto_map,
    cached_file,
    copy_func,
    download_url,
    extract_commit_hash,
    is_remote_url,
    is_torch_available,
    logging,
)


logger = logging.get_logger(__name__)

_re_configuration_file = re.compile(r"config\.(.*)\.json")


class PretrainedConfig(PushToHubMixin):
    # no-format
    r"""
    Base class for all configuration classes. Handles a few parameters common to all models' configurations as well as
    methods for loading/downloading/saving configurations.

    <Tip>

    A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to
    initialize a model does **not** load the model weights. It only affects the model's configuration.

    </Tip>

    Class attributes (overridden by derived classes):

    - **model_type** (`str`) -- An identifier for the model type, serialized into the JSON file, and used to recreate
      the correct object in [`~transformers.AutoConfig`].
    - **is_composition** (`bool`) -- Whether the config class is composed of multiple sub-configs. In this case the
      config has to be initialized from two or more configs of type [`~transformers.PretrainedConfig`] like:
      [`~transformers.EncoderDecoderConfig`] or [`~RagConfig`].
    - **keys_to_ignore_at_inference** (`List[str]`) -- A list of keys to ignore by default when looking at dictionary
      outputs of the model during inference.
    - **attribute_map** (`Dict[str, str]`) -- A dict that maps model specific attribute names to the standardized
      naming of attributes.

    Common attributes (present in all subclasses):

    - **vocab_size** (`int`) -- The number of tokens in the vocabulary, which is also the first dimension of the
      embeddings matrix (this attribute may be missing for models that don't have a text modality like ViT).
    - **hidden_size** (`int`) -- The hidden size of the model.
    - **num_attention_heads** (`int`) -- The number of attention heads used in the multi-head attention layers of the
      model.
    - **num_hidden_layers** (`int`) -- The number of blocks in the model.

    Args:
        name_or_path (`str`, *optional*, defaults to `""`):
            Store the string that was passed to [`PreTrainedModel.from_pretrained`] or
            [`TFPreTrainedModel.from_pretrained`] as `pretrained_model_name_or_path` if the configuration was created
            with such a method.
        output_hidden_states (`bool`, *optional*, defaults to `False`):
            Whether or not the model should return all hidden-states.
        output_attentions (`bool`, *optional*, defaults to `False`):
            Whether or not the model should return all attentions.
        return_dict (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return a [`~transformers.utils.ModelOutput`] instead of a plain tuple.
        is_encoder_decoder (`bool`, *optional*, defaults to `False`):
            Whether the model is used as an encoder/decoder or not.
        is_decoder (`bool`, *optional*, defaults to `False`):
            Whether the model is used as a decoder or not (if not, the model is used as an encoder).
        cross_attention_hidden_size (`int`, *optional*):
            The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder
            setting and the cross-attention hidden dimension differs from `self.config.hidden_size`.
        add_cross_attention (`bool`, *optional*, defaults to `False`):
            Whether cross-attention layers should be added to the model. Note, this option is only relevant for models
            that can be used as decoder models within the [`EncoderDecoderModel`] class, which consists of all models
            in `AUTO_MODELS_FOR_CAUSAL_LM`.
        tie_encoder_decoder (`bool`, *optional*, defaults to `False`):
            Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder
            and decoder model to have the exact same parameter names.
        pruned_heads (`Dict[int, List[int]]`, *optional*, defaults to `{}`):
            Pruned heads of the model. The keys are the selected layer indices and the associated values, the list of
            heads to prune in said layer.

            For instance `{1: [0, 2], 2: [2, 3]}` will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
        chunk_size_feed_forward (`int`, *optional*, defaults to `0`):
            The chunk size of all feed forward layers in the residual attention blocks. A chunk size of `0` means that
            the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes `n` <
            sequence_length embeddings at a time. For more information on feed forward chunking, see [How does Feed
            Forward Chunking work?](../glossary.html#feed-forward-chunking).

        > Parameters for sequence generation

        max_length (`int`, *optional*, defaults to 20):
            Maximum length that will be used by default in the `generate` method of the model.
        min_length (`int`, *optional*, defaults to 0):
            Minimum length that will be used by default in the `generate` method of the model.
        do_sample (`bool`, *optional*, defaults to `False`):
            Flag that will be used by default in the `generate` method of the model. Whether or not to use sampling;
            use greedy decoding otherwise.
        early_stopping (`bool`, *optional*, defaults to `False`):
            Flag that will be used by default in the `generate` method of the model. Whether to stop the beam search
            when at least `num_beams` sentences are finished per batch or not.
        num_beams (`int`, *optional*, defaults to 1):
            Number of beams for beam search that will be used by default in the `generate` method of the model. 1 means
            no beam search.
        num_beam_groups (`int`, *optional*, defaults to 1):
            Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams
            that will be used by default in the `generate` method of the model. 1 means no group beam search.
        diversity_penalty (`float`, *optional*, defaults to 0.0):
            Value to control diversity for group beam search that will be used by default in the `generate` method of
            the model. 0 means no diversity penalty. The higher the penalty, the more diverse are the outputs.
        temperature (`float`, *optional*, defaults to 1.0):
            The value used to modulate the next token probabilities that will be used by default in the `generate`
            method of the model. Must be strictly positive.
        top_k (`int`, *optional*, defaults to 50):
            Number of highest probability vocabulary tokens to keep for top-k-filtering that will be used by default in
            the `generate` method of the model.
        top_p (`float`, *optional*, defaults to 1):
            Value that will be used by default in the `generate` method of the model for `top_p`. If set to float < 1,
            only the most probable tokens with probabilities that add up to `top_p` or higher are kept for generation.
        typical_p (`float`, *optional*, defaults to 1):
            Local typicality measures how similar the conditional probability of predicting a target token next is to
            the expected conditional probability of predicting a random token next, given the partial text already
            generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that
            add up to `typical_p` or higher are kept for generation. See [this
            paper](https://arxiv.org/pdf/2202.00666.pdf) for more details.
        repetition_penalty (`float`, *optional*, defaults to 1):
            Parameter for repetition penalty that will be used by default in the `generate` method of the model. 1.0
            means no penalty.
        length_penalty (`float`, *optional*, defaults to 1):
            Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to
            the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
            likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while
            `length_penalty` < 0.0 encourages shorter sequences.
        no_repeat_ngram_size (`int`, *optional*, defaults to 0):
            Value that will be used by default in the `generate` method of the model for `no_repeat_ngram_size`. If set
            to int > 0, all ngrams of that size can only occur once.
        encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
            Value that will be used by default in the `generate` method of the model for
            `encoder_no_repeat_ngram_size`. If set to int > 0, all ngrams of that size that occur in the
            `encoder_input_ids` cannot occur in the `decoder_input_ids`.
        bad_words_ids (`List[int]`, *optional*):
            List of token ids that are not allowed to be generated that will be used by default in the `generate`
            method of the model. In order to get the tokens of the words that should not appear in the generated text,
            use `tokenizer.encode(bad_word, add_prefix_space=True)`.
        num_return_sequences (`int`, *optional*, defaults to 1):
            Number of independently computed returned sequences for each element in the batch that will be used by
            default in the `generate` method of the model.
        output_scores (`bool`, *optional*, defaults to `False`):
            Whether the model should return the logits when used for generation.
        return_dict_in_generate (`bool`, *optional*, defaults to `False`):
            Whether the model should return a [`~transformers.utils.ModelOutput`] instead of a `torch.LongTensor`.
        forced_bos_token_id (`int`, *optional*):
            The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful for
            multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be the target
            language token.
        forced_eos_token_id (`int`, *optional*):
            The id of the token to force as the last generated token when `max_length` is reached.
        remove_invalid_values (`bool`, *optional*):
            Whether to remove possible _nan_ and _inf_ outputs of the model to prevent the generation method from
            crashing. Note that using `remove_invalid_values` can slow down generation.

        > Parameters for fine-tuning tasks

        architectures (`List[str]`, *optional*):
            Model architectures that can be used with the model pretrained weights.
        finetuning_task (`str`, *optional*):
            Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow
            or PyTorch) checkpoint.
        id2label (`Dict[int, str]`, *optional*):
            A map from index (for instance prediction index, or target index) to label.
        label2id (`Dict[str, int]`, *optional*): A map from label to index for the model.
        num_labels (`int`, *optional*):
            Number of labels to use in the last layer added to the model, typically for a classification task.
        task_specific_params (`Dict[str, Any]`, *optional*):
            Additional keyword arguments to store for the current task.
        problem_type (`str`, *optional*):
            Problem type for `XxxForSequenceClassification` models. Can be one of `"regression"`,
            `"single_label_classification"` or `"multi_label_classification"`.

        > Parameters linked to the tokenizer

        tokenizer_class (`str`, *optional*):
            The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the
            model by default).
        prefix (`str`, *optional*):
            A specific prompt that should be added at the beginning of each text before calling the model.
        bos_token_id (`int`, *optional*): The id of the _beginning-of-stream_ token.
        pad_token_id (`int`, *optional*): The id of the _padding_ token.
        eos_token_id (`int`, *optional*): The id of the _end-of-stream_ token.
        decoder_start_token_id (`int`, *optional*):
            If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token.
        sep_token_id (`int`, *optional*): The id of the _separation_ token.

        > PyTorch specific parameters

        torchscript (`bool`, *optional*, defaults to `False`):
            Whether or not the model should be used with Torchscript.
        tie_word_embeddings (`bool`, *optional*, defaults to `True`):
            Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the
            model has an output word embedding layer.
        torch_dtype (`str`, *optional*):
            The `dtype` of the weights. This attribute can be used to initialize the model to a non-default `dtype`
            (which is normally `float32`) and thus allow for optimal storage allocation. For example, if the saved
            model is `float16`, ideally we want to load it back using the minimal amount of memory needed to load
            `float16` weights. Since the config object is stored in plain text, this attribute contains just the
            floating type string without the `torch.` prefix. For example, for `torch.float16`, `torch_dtype` is the
            `"float16"` string.

            This attribute is currently not being used during model loading time, but this may change in future
            versions. But we can already start preparing for the future by saving the dtype with save_pretrained.

        > TensorFlow specific parameters

        use_bfloat16 (`bool`, *optional*, defaults to `False`):
            Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models).
        tf_legacy_loss (`bool`, *optional*, defaults to `False`):
            Whether the model should use legacy TensorFlow losses. Legacy losses have variable output shapes and may
            not be XLA-compatible. This option is here for backward compatibility and will be removed in Transformers
            v5.
    """

    model_type: str = ""
    is_composition: bool = False
    attribute_map: Dict[str, str] = {}
    _auto_class: Optional[str] = None

    def __setattr__(self, key, value):
        if key in super().__getattribute__("attribute_map"):
            key = super().__getattribute__("attribute_map")[key]
        super().__setattr__(key, value)

    def __getattribute__(self, key):
        if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
            key = super().__getattribute__("attribute_map")[key]
        return super().__getattribute__(key)

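    # Illustrative sketch (not part of the original file): the two overrides above let a
    # subclass expose legacy attribute names through `attribute_map`. Assuming a
    # hypothetical subclass:
    #
    #     class MyConfig(PretrainedConfig):
    #         attribute_map = {"n_embd": "hidden_size"}
    #
    #     cfg = MyConfig(hidden_size=64)
    #     cfg.n_embd        # reads cfg.hidden_size -> 64
    #     cfg.n_embd = 128  # writes cfg.hidden_size
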
    def __init__(self, **kwargs):
        # Attributes with defaults
        self.return_dict = kwargs.pop("return_dict", True)
        self.output_hidden_states = kwargs.pop("output_hidden_states", False)
        self.output_attentions = kwargs.pop("output_attentions", False)
        self.torchscript = kwargs.pop("torchscript", False)  # Only used by PyTorch models
        self.torch_dtype = kwargs.pop("torch_dtype", None)  # Only used by PyTorch models
        self.use_bfloat16 = kwargs.pop("use_bfloat16", False)
        self.tf_legacy_loss = kwargs.pop("tf_legacy_loss", False)  # Only used by TensorFlow models
        self.pruned_heads = kwargs.pop("pruned_heads", {})
        self.tie_word_embeddings = kwargs.pop(
            "tie_word_embeddings", True
        )  # Whether input and output word embeddings should be tied for all MLM, LM and Seq2Seq models.
        self.chunk_size_feed_forward = kwargs.pop("chunk_size_feed_forward", 0)

        # `is_decoder` is used in encoder-decoder models to differentiate the encoder from the decoder
        self.is_encoder_decoder = kwargs.pop("is_encoder_decoder", False)
        self.is_decoder = kwargs.pop("is_decoder", False)
        self.cross_attention_hidden_size = kwargs.pop("cross_attention_hidden_size", None)
        self.add_cross_attention = kwargs.pop("add_cross_attention", False)
        self.tie_encoder_decoder = kwargs.pop("tie_encoder_decoder", False)

        # Retrocompatibility: Parameters for sequence generation. While we will keep the ability to load these
        # parameters, saving them will be deprecated. In a distant future, we won't need to load them.
        for parameter_name, default_value in self._get_generation_defaults().items():
            setattr(self, parameter_name, kwargs.pop(parameter_name, default_value))

        # Fine-tuning task arguments
        self.architectures = kwargs.pop("architectures", None)
        self.finetuning_task = kwargs.pop("finetuning_task", None)
        self.id2label = kwargs.pop("id2label", None)
        self.label2id = kwargs.pop("label2id", None)
        if self.label2id is not None and not isinstance(self.label2id, dict):
            raise ValueError("Argument label2id should be a dictionary.")
        if self.id2label is not None:
            if not isinstance(self.id2label, dict):
                raise ValueError("Argument id2label should be a dictionary.")
            num_labels = kwargs.pop("num_labels", None)
            if num_labels is not None and len(self.id2label) != num_labels:
                logger.warning(
                    f"You passed along `num_labels={num_labels}` with an incompatible id to label map: "
                    f"{self.id2label}. The number of labels will be overwritten to {self.num_labels}."
                )
            self.id2label = {int(key): value for key, value in self.id2label.items()}
            # Keys are always strings in JSON so convert ids to int here.
        else:
            self.num_labels = kwargs.pop("num_labels", 2)

        if self.torch_dtype is not None and isinstance(self.torch_dtype, str):
            # we will start using self.torch_dtype in v5, but to be consistent with
            # from_pretrained's torch_dtype arg convert it to an actual torch.dtype object
            if is_torch_available():
                import torch

                self.torch_dtype = getattr(torch, self.torch_dtype)

        # Tokenizer arguments TODO: eventually tokenizer and models should share the same config
        self.tokenizer_class = kwargs.pop("tokenizer_class", None)
        self.prefix = kwargs.pop("prefix", None)
        self.bos_token_id = kwargs.pop("bos_token_id", None)
        self.pad_token_id = kwargs.pop("pad_token_id", None)
        self.eos_token_id = kwargs.pop("eos_token_id", None)
        self.sep_token_id = kwargs.pop("sep_token_id", None)

        self.decoder_start_token_id = kwargs.pop("decoder_start_token_id", None)

        # task specific arguments
        self.task_specific_params = kwargs.pop("task_specific_params", None)

        # regression / multi-label classification
        self.problem_type = kwargs.pop("problem_type", None)
        allowed_problem_types = ("regression", "single_label_classification", "multi_label_classification")
        if self.problem_type is not None and self.problem_type not in allowed_problem_types:
            raise ValueError(
                f"The config parameter `problem_type` was not understood: received {self.problem_type} "
                "but only 'regression', 'single_label_classification' and 'multi_label_classification' are valid."
            )

        # TPU arguments
        if kwargs.pop("xla_device", None) is not None:
            logger.warning(
                "The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can "
                "safely remove it from your `config.json` file."
            )

        # Name or path to the pretrained checkpoint
        self._name_or_path = str(kwargs.pop("name_or_path", ""))
        # Config hash
        self._commit_hash = kwargs.pop("_commit_hash", None)

        # Attention implementation to use, if relevant.
        self._attn_implementation_internal = kwargs.pop("attn_implementation", None)

        # Drop the transformers version info
        self.transformers_version = kwargs.pop("transformers_version", None)

        # Deal with gradient checkpointing
        if kwargs.get("gradient_checkpointing", False):
            warnings.warn(
                "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
                "Transformers. Use `model.gradient_checkpointing_enable()` instead, or if you are using the "
                "`Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`."
            )

        # Additional attributes without default values
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except AttributeError as err:
                logger.error(f"Can't set {key} with value {value} for {self}")
                raise err

    @property
    def name_or_path(self) -> str:
        return getattr(self, "_name_or_path", None)

    @name_or_path.setter
    def name_or_path(self, value):
        self._name_or_path = str(value)  # Make sure that name_or_path is a string (for JSON encoding)

    @property
    def use_return_dict(self) -> bool:
        """
        `bool`: Whether or not the model should return a [`~utils.ModelOutput`] instead of tuples.
        """
        # If torchscript is set, force `return_dict=False` to avoid jit errors
        return self.return_dict and not self.torchscript

    @property
    def num_labels(self) -> int:
        """
        `int`: The number of labels for classification models.
        """
        return len(self.id2label)

    @num_labels.setter
    def num_labels(self, num_labels: int):
        if not hasattr(self, "id2label") or self.id2label is None or len(self.id2label) != num_labels:
            self.id2label = {i: f"LABEL_{i}" for i in range(num_labels)}
            self.label2id = dict(zip(self.id2label.values(), self.id2label.keys()))

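    # Illustrative sketch (not part of the original file): setting `num_labels` when no
    # id2label map exists generates placeholder labels:
    #
    #     cfg.num_labels = 3
    #     cfg.id2label  # {0: "LABEL_0", 1: "LABEL_1", 2: "LABEL_2"}
    #     cfg.label2id  # {"LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2}
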
    @property
    def _attn_implementation(self):
        # This property is made private for now (as it cannot be changed and a PreTrainedModel.use_attn_implementation method needs to be implemented.)
        if hasattr(self, "_attn_implementation_internal"):
            if self._attn_implementation_internal is None:
                # `config.attn_implementation` should never be None, for backward compatibility.
                return "eager"
            else:
                return self._attn_implementation_internal
        else:
            return "eager"

    @_attn_implementation.setter
    def _attn_implementation(self, value):
        self._attn_implementation_internal = value

    def save_pretrained(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs):
        """
        Save a configuration object to the directory `save_directory`, so that it can be re-loaded using the
        [`~PretrainedConfig.from_pretrained`] class method.

        Args:
            save_directory (`str` or `os.PathLike`):
                Directory where the configuration JSON file will be saved (will be created if it does not exist).
            push_to_hub (`bool`, *optional*, defaults to `False`):
                Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
                repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
                namespace).
            kwargs (`Dict[str, Any]`, *optional*):
                Additional keyword arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
        """
        self._set_token_in_kwargs(kwargs)

        if os.path.isfile(save_directory):
            raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file")

        non_default_generation_parameters = {}
        for parameter_name, default_value in self._get_generation_defaults().items():
            if hasattr(self, parameter_name) and getattr(self, parameter_name) != default_value:
                non_default_generation_parameters[parameter_name] = getattr(self, parameter_name)
        if len(non_default_generation_parameters) > 0:
            logger.warning(
                "Some non-default generation parameters are set in the model config. These should go into a "
                "GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) "
                "instead. This warning will be raised to an exception in v4.41.\n"
                f"Non-default generation parameters: {str(non_default_generation_parameters)}"
            )

        os.makedirs(save_directory, exist_ok=True)

        if push_to_hub:
            commit_message = kwargs.pop("commit_message", None)
            repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1])
            repo_id = self._create_repo(repo_id, **kwargs)
            files_timestamps = self._get_files_timestamps(save_directory)

        # If we have a custom config, we copy the file defining it in the folder and set the attributes so it can be
        # loaded from the Hub.
        if self._auto_class is not None:
            custom_object_save(self, save_directory, config=self)

        # If we save using the predefined names, we can load using `from_pretrained`
        output_config_file = os.path.join(save_directory, CONFIG_NAME)

        self.to_json_file(output_config_file, use_diff=True)
        logger.info(f"Configuration saved in {output_config_file}")

        if push_to_hub:
            self._upload_modified_files(
                save_directory,
                repo_id,
                files_timestamps,
                commit_message=commit_message,
                token=kwargs.get("token"),
            )

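    # Illustrative sketch (not part of the original file): a save/load round trip,
    # assuming a derived config class such as BertConfig:
    #
    #     config = BertConfig(hidden_size=1024)
    #     config.save_pretrained("./my_model")  # writes ./my_model/config.json
    #     config = BertConfig.from_pretrained("./my_model")
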
    @staticmethod
    def _set_token_in_kwargs(kwargs, token=None):
        """Temporary method to deal with `token` and `use_auth_token`.

        This method is to avoid applying the same changes in all model config classes that overwrite `from_pretrained`.

        Need to clean up `use_auth_token` in a follow-up PR.
        """
        # Some model config classes like CLIP define their own `from_pretrained` without the new argument `token` yet.
        if token is None:
            token = kwargs.pop("token", None)
        use_auth_token = kwargs.pop("use_auth_token", None)

        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            token = use_auth_token

        if token is not None:
            kwargs["token"] = token

    @classmethod
    def from_pretrained(
        cls,
        pretrained_model_name_or_path: Union[str, os.PathLike],
        cache_dir: Optional[Union[str, os.PathLike]] = None,
        force_download: bool = False,
        local_files_only: bool = False,
        token: Optional[Union[str, bool]] = None,
        revision: str = "main",
        **kwargs,
    ) -> "PretrainedConfig":
        r"""
        Instantiate a [`PretrainedConfig`] (or a derived class) from a pretrained model configuration.

        Args:
            pretrained_model_name_or_path (`str` or `os.PathLike`):
                This can be either:

                - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
                  huggingface.co.
                - a path to a *directory* containing a configuration file saved using the
                  [`~PretrainedConfig.save_pretrained`] method, e.g., `./my_model_directory/`.
                - a path or url to a saved configuration JSON *file*, e.g., `./my_model_directory/configuration.json`.
            cache_dir (`str` or `os.PathLike`, *optional*):
                Path to a directory in which a downloaded pretrained model configuration should be cached if the
                standard cache should not be used.
            force_download (`bool`, *optional*, defaults to `False`):
                Whether or not to force to (re-)download the configuration files and override the cached versions if
                they exist.
            resume_download (`bool`, *optional*, defaults to `False`):
                Whether or not to delete an incompletely received file. Attempts to resume the download if such a file
                exists.
            proxies (`Dict[str, str]`, *optional*):
                A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
                'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
            token (`str` or `bool`, *optional*):
                The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
                the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
            revision (`str`, *optional*, defaults to `"main"`):
                The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
                git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
                identifier allowed by git.

                <Tip>

                To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>"`.

                </Tip>

            return_unused_kwargs (`bool`, *optional*, defaults to `False`):
                If `False`, then this function returns just the final configuration object.

                If `True`, then this function returns a `Tuple(config, unused_kwargs)` where *unused_kwargs* is a
                dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the
                part of `kwargs` which has not been used to update `config` and is otherwise ignored.
            subfolder (`str`, *optional*, defaults to `""`):
                In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can
                specify the folder name here.
            kwargs (`Dict[str, Any]`, *optional*):
                The values in kwargs of any keys which are configuration attributes will be used to override the loaded
                values. Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled
                by the `return_unused_kwargs` keyword parameter.

        Returns:
            [`PretrainedConfig`]: The configuration object instantiated from this pretrained model.

        Examples:

        ```python
        # We can't instantiate directly the base class *PretrainedConfig* so let's show the examples on a
        # derived class: BertConfig
        config = BertConfig.from_pretrained(
            "google-bert/bert-base-uncased"
        )  # Download configuration from huggingface.co and cache.
        config = BertConfig.from_pretrained(
            "./test/saved_model/"
        )  # E.g. config (or model) was saved using *save_pretrained('./test/saved_model/')*
        config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json")
        config = BertConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
        assert config.output_attentions == True
        config, unused_kwargs = BertConfig.from_pretrained(
            "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
        )
        assert config.output_attentions == True
        assert unused_kwargs == {"foo": False}
        ```"""
        kwargs["cache_dir"] = cache_dir
        kwargs["force_download"] = force_download
        kwargs["local_files_only"] = local_files_only
        kwargs["revision"] = revision

        cls._set_token_in_kwargs(kwargs, token)

        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
        if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
            logger.warning(
                f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
                f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
            )

        return cls.from_dict(config_dict, **kwargs)

    @classmethod
    def get_config_dict(
        cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs
    ) -> Tuple[Dict[str, Any], Dict[str, Any]]:
        """
        From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a
        [`PretrainedConfig`] using `from_dict`.

        Parameters:
            pretrained_model_name_or_path (`str` or `os.PathLike`):
                The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.

        Returns:
            `Tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the configuration object.

        """
        cls._set_token_in_kwargs(kwargs)

        original_kwargs = copy.deepcopy(kwargs)
        # Get config dict associated with the base config file
        config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
        if "_commit_hash" in config_dict:
            original_kwargs["_commit_hash"] = config_dict["_commit_hash"]

        # That config file may point us toward another config file to use.
        if "configuration_files" in config_dict:
            configuration_file = get_configuration_file(config_dict["configuration_files"])
            config_dict, kwargs = cls._get_config_dict(
                pretrained_model_name_or_path, _configuration_file=configuration_file, **original_kwargs
            )

        return config_dict, kwargs

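    # Illustrative sketch (not part of the original file): a repo can ship several
    # versioned configs and route different library versions to the right one. E.g. a
    # config.json containing
    #
    #     {"configuration_files": ["config.4.27.0.json"], ...}
    #
    # makes `get_config_dict` resolve and re-fetch `config.4.27.0.json` when the
    # installed transformers version is >= 4.27.0 (see `get_configuration_file` below).
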
    @classmethod
    def _get_config_dict(
        cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs
    ) -> Tuple[Dict[str, Any], Dict[str, Any]]:
        cache_dir = kwargs.pop("cache_dir", None)
        force_download = kwargs.pop("force_download", False)
        resume_download = kwargs.pop("resume_download", False)
        proxies = kwargs.pop("proxies", None)
        token = kwargs.pop("token", None)
        local_files_only = kwargs.pop("local_files_only", False)
        revision = kwargs.pop("revision", None)
        trust_remote_code = kwargs.pop("trust_remote_code", None)
        subfolder = kwargs.pop("subfolder", "")
        from_pipeline = kwargs.pop("_from_pipeline", None)
        from_auto_class = kwargs.pop("_from_auto", False)
        commit_hash = kwargs.pop("_commit_hash", None)

        if trust_remote_code is True:
            logger.warning(
                "The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is"
                " ignored."
            )

        user_agent = {"file_type": "config", "from_auto_class": from_auto_class}
        if from_pipeline is not None:
            user_agent["using_pipeline"] = from_pipeline

        pretrained_model_name_or_path = str(pretrained_model_name_or_path)

        is_local = os.path.isdir(pretrained_model_name_or_path)
        if os.path.isfile(os.path.join(subfolder, pretrained_model_name_or_path)):
            # Special case when pretrained_model_name_or_path is a local file
            resolved_config_file = pretrained_model_name_or_path
            is_local = True
        elif is_remote_url(pretrained_model_name_or_path):
            configuration_file = pretrained_model_name_or_path
            resolved_config_file = download_url(pretrained_model_name_or_path)
        else:
            configuration_file = kwargs.pop("_configuration_file", CONFIG_NAME)

            try:
                # Load from local folder or from cache or download from model Hub and cache
                resolved_config_file = cached_file(
                    pretrained_model_name_or_path,
                    configuration_file,
                    cache_dir=cache_dir,
                    force_download=force_download,
                    proxies=proxies,
                    resume_download=resume_download,
                    local_files_only=local_files_only,
                    token=token,
                    user_agent=user_agent,
                    revision=revision,
                    subfolder=subfolder,
                    _commit_hash=commit_hash,
                )
                commit_hash = extract_commit_hash(resolved_config_file, commit_hash)
            except EnvironmentError:
                # Raise any environment error raised by `cached_file`. It will have a helpful error message adapted
                # to the original exception.
                raise
            except Exception:
                # For any other exception, we throw a generic error.
                raise EnvironmentError(
                    f"Can't load the configuration of '{pretrained_model_name_or_path}'. If you were trying to load it"
                    " from 'https://huggingface.co/models', make sure you don't have a local directory with the same"
                    f" name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory"
                    f" containing a {configuration_file} file"
                )

        try:
            # Load config dict
            config_dict = cls._dict_from_json_file(resolved_config_file)
            config_dict["_commit_hash"] = commit_hash
        except (json.JSONDecodeError, UnicodeDecodeError):
            raise EnvironmentError(
                f"It looks like the config file at '{resolved_config_file}' is not a valid JSON file."
            )

        if is_local:
            logger.info(f"loading configuration file {resolved_config_file}")
        else:
            logger.info(f"loading configuration file {configuration_file} from cache at {resolved_config_file}")

        if "auto_map" in config_dict and not is_local:
            config_dict["auto_map"] = add_model_info_to_auto_map(
                config_dict["auto_map"], pretrained_model_name_or_path
            )
        return config_dict, kwargs

    @classmethod
    def from_dict(cls, config_dict: Dict[str, Any], **kwargs) -> "PretrainedConfig":
        """
        Instantiates a [`PretrainedConfig`] from a Python dictionary of parameters.

        Args:
            config_dict (`Dict[str, Any]`):
                Dictionary that will be used to instantiate the configuration object. Such a dictionary can be
                retrieved from a pretrained checkpoint by leveraging the [`~PretrainedConfig.get_config_dict`] method.
            kwargs (`Dict[str, Any]`):
                Additional parameters from which to initialize the configuration object.

        Returns:
            [`PretrainedConfig`]: The configuration object instantiated from those parameters.
        """
        return_unused_kwargs = kwargs.pop("return_unused_kwargs", False)
        # Those arguments may be passed along for our internal telemetry.
        # We remove them so they don't appear in `return_unused_kwargs`.
        kwargs.pop("_from_auto", None)
        kwargs.pop("_from_pipeline", None)
        # The commit hash might have been updated in the `config_dict`, we don't want the kwargs to erase that update.
        if "_commit_hash" in kwargs and "_commit_hash" in config_dict:
            kwargs["_commit_hash"] = config_dict["_commit_hash"]

        # We remove it from kwargs so that it does not appear in `return_unused_kwargs`.
        config_dict["attn_implementation"] = kwargs.pop("attn_implementation", None)

        config = cls(**config_dict)

        if hasattr(config, "pruned_heads"):
            config.pruned_heads = {int(key): value for key, value in config.pruned_heads.items()}

        # Update config with kwargs if needed
        if "num_labels" in kwargs and "id2label" in kwargs:
            num_labels = kwargs["num_labels"]
            id2label = kwargs["id2label"] if kwargs["id2label"] is not None else []
            if len(id2label) != num_labels:
                raise ValueError(
                    f"You passed along `num_labels={num_labels}` with an incompatible id to label map: "
                    f"{kwargs['id2label']}. Since those arguments are inconsistent with each other, you should remove "
                    "one of them."
                )
        to_remove = []
        for key, value in kwargs.items():
            if hasattr(config, key):
                current_attr = getattr(config, key)
                # To authorize passing a custom subconfig as kwarg in models that have nested configs.
                if isinstance(current_attr, PretrainedConfig) and isinstance(value, dict):
                    value = current_attr.__class__(**value)
                setattr(config, key, value)
                if key != "torch_dtype":
                    to_remove.append(key)
        for key in to_remove:
            kwargs.pop(key, None)

        logger.info(f"Model config {config}")
        if return_unused_kwargs:
            return config, kwargs
        else:
            return config

    @classmethod
    def from_json_file(cls, json_file: Union[str, os.PathLike]) -> "PretrainedConfig":
        """
        Instantiates a [`PretrainedConfig`] from the path to a JSON file of parameters.

        Args:
            json_file (`str` or `os.PathLike`):
                Path to the JSON file containing the parameters.

        Returns:
            [`PretrainedConfig`]: The configuration object instantiated from that JSON file.

        """
        config_dict = cls._dict_from_json_file(json_file)
        return cls(**config_dict)

    @classmethod
    def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
        with open(json_file, "r", encoding="utf-8") as reader:
            text = reader.read()
        return json.loads(text)

    def __eq__(self, other):
        return isinstance(other, PretrainedConfig) and (self.__dict__ == other.__dict__)

    def __repr__(self):
        return f"{self.__class__.__name__} {self.to_json_string()}"

    def to_diff_dict(self) -> Dict[str, Any]:
        """
        Removes all attributes from config which correspond to the default config attributes for better readability and
        serializes to a Python dictionary.

        Returns:
            `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
        """
        config_dict = self.to_dict()

        # get the default config dict
        default_config_dict = PretrainedConfig().to_dict()

        # get class specific config dict
        class_config_dict = self.__class__().to_dict() if not self.is_composition else {}

        serializable_config_dict = {}

        # only serialize values that differ from the default config
        for key, value in config_dict.items():
            if (
                isinstance(getattr(self, key, None), PretrainedConfig)
                and key in class_config_dict
                and isinstance(class_config_dict[key], dict)
            ):
                # For nested configs we need to clean the diff recursively
                diff = recursive_diff_dict(value, class_config_dict[key], config_obj=getattr(self, key, None))
                if "model_type" in value:
                    # Needs to be set even if it's not in the diff
                    diff["model_type"] = value["model_type"]
                if len(diff) > 0:
                    serializable_config_dict[key] = diff
            elif (
                key not in default_config_dict
                or key == "transformers_version"
                or value != default_config_dict[key]
                or (key in class_config_dict and value != class_config_dict[key])
            ):
                serializable_config_dict[key] = value

        if hasattr(self, "quantization_config"):
            serializable_config_dict["quantization_config"] = (
                self.quantization_config.to_dict()
                if not isinstance(self.quantization_config, dict)
                else self.quantization_config
            )

            # pop the `_pre_quantization_dtype` as torch.dtypes are not serializable.
            _ = serializable_config_dict.pop("_pre_quantization_dtype", None)

        self.dict_torch_dtype_to_str(serializable_config_dict)

        if "_attn_implementation_internal" in serializable_config_dict:
            del serializable_config_dict["_attn_implementation_internal"]

        return serializable_config_dict

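    # Illustrative sketch (not part of the original file): only non-default values
    # survive the diff, assuming a derived class such as BertConfig:
    #
    #     cfg = BertConfig(hidden_size=1024)
    #     cfg.to_diff_dict()  # contains "hidden_size": 1024, but not attributes
    #                         # still equal to the BertConfig() defaults
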
    def to_dict(self) -> Dict[str, Any]:
        """
        Serializes this instance to a Python dictionary.

        Returns:
            `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
        """
        output = copy.deepcopy(self.__dict__)
        if hasattr(self.__class__, "model_type"):
            output["model_type"] = self.__class__.model_type
        if "_auto_class" in output:
            del output["_auto_class"]
        if "_commit_hash" in output:
            del output["_commit_hash"]
        if "_attn_implementation_internal" in output:
            del output["_attn_implementation_internal"]

        # Transformers version when serializing the model
        output["transformers_version"] = __version__

        for key, value in output.items():
            # Deal with nested configs like CLIP
            if isinstance(value, PretrainedConfig):
                value = value.to_dict()
                del value["transformers_version"]

            output[key] = value

        if hasattr(self, "quantization_config"):
            output["quantization_config"] = (
                self.quantization_config.to_dict()
                if not isinstance(self.quantization_config, dict)
                else self.quantization_config
            )

            # pop the `_pre_quantization_dtype` as torch.dtypes are not serializable.
            _ = output.pop("_pre_quantization_dtype", None)

        self.dict_torch_dtype_to_str(output)

        return output

    def to_json_string(self, use_diff: bool = True) -> str:
        """
        Serializes this instance to a JSON string.

        Args:
            use_diff (`bool`, *optional*, defaults to `True`):
                If set to `True`, only the difference between the config instance and the default `PretrainedConfig()`
                is serialized to JSON string.

        Returns:
            `str`: String containing all the attributes that make up this configuration instance in JSON format.
        """
        if use_diff is True:
            config_dict = self.to_diff_dict()
        else:
            config_dict = self.to_dict()
        return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"

    def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True):
        """
        Save this instance to a JSON file.

        Args:
            json_file_path (`str` or `os.PathLike`):
                Path to the JSON file in which this configuration instance's parameters will be saved.
            use_diff (`bool`, *optional*, defaults to `True`):
                If set to `True`, only the difference between the config instance and the default `PretrainedConfig()`
                is serialized to JSON file.
        """
        with open(json_file_path, "w", encoding="utf-8") as writer:
            writer.write(self.to_json_string(use_diff=use_diff))

    def update(self, config_dict: Dict[str, Any]):
        """
        Updates attributes of this class with attributes from `config_dict`.

        Args:
            config_dict (`Dict[str, Any]`): Dictionary of attributes that should be updated for this class.
        """
        for key, value in config_dict.items():
            setattr(self, key, value)

    def update_from_string(self, update_str: str):
        """
        Updates attributes of this class with attributes from `update_str`.

        The expected format is ints, floats and strings as is, and for booleans use `true` or `false`. For example:
        "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"

        The keys to change have to already exist in the config object.

        Args:
            update_str (`str`): String with attributes that should be updated for this class.

        """

        d = dict(x.split("=") for x in update_str.split(","))
        for k, v in d.items():
            if not hasattr(self, k):
                raise ValueError(f"key {k} isn't in the original config dict")

            old_v = getattr(self, k)
            if isinstance(old_v, bool):
                if v.lower() in ["true", "1", "y", "yes"]:
                    v = True
                elif v.lower() in ["false", "0", "n", "no"]:
                    v = False
                else:
                    raise ValueError(f"can't derive true or false from {v} (key {k})")
            elif isinstance(old_v, int):
                v = int(v)
            elif isinstance(old_v, float):
                v = float(v)
            elif not isinstance(old_v, str):
                raise ValueError(
                    f"You can only update int, float, bool or string values in the config, got {v} for key {k}"
                )

            setattr(self, k, v)

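    # Illustrative sketch (not part of the original file): the string form mirrors
    # `update`, with values coerced to the type of the existing attribute:
    #
    #     cfg.update_from_string("n_embd=10,resid_pdrop=0.2,scale_attn_weights=false")
    #     # equivalent to:
    #     # cfg.update({"n_embd": 10, "resid_pdrop": 0.2, "scale_attn_weights": False})
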
    def dict_torch_dtype_to_str(self, d: Dict[str, Any]) -> None:
        """
        Checks whether the passed dictionary and its nested dicts have a *torch_dtype* key and if it's not None,
        converts torch.dtype to a string of just the type. For example, `torch.float32` gets converted into the
        *"float32"* string, which can then be stored in the JSON format.
        """
        if d.get("torch_dtype", None) is not None and not isinstance(d["torch_dtype"], str):
            d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1]
        for value in d.values():
            if isinstance(value, dict):
                self.dict_torch_dtype_to_str(value)

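    # Illustrative sketch (not part of the original file): str(torch.float16) is
    # "torch.float16", so splitting on "." keeps only the "float16" part:
    #
    #     d = {"torch_dtype": torch.float16, "sub": {"torch_dtype": torch.float32}}
    #     cfg.dict_torch_dtype_to_str(d)
    #     # d == {"torch_dtype": "float16", "sub": {"torch_dtype": "float32"}}
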
    @classmethod
    def register_for_auto_class(cls, auto_class="AutoConfig"):
        """
        Register this class with a given auto class. This should only be used for custom configurations as the ones in
        the library are already mapped with `AutoConfig`.

        <Tip warning={true}>

        This API is experimental and may have some slight breaking changes in the next releases.

        </Tip>

        Args:
            auto_class (`str` or `type`, *optional*, defaults to `"AutoConfig"`):
                The auto class to register this new configuration with.
        """
        if not isinstance(auto_class, str):
            auto_class = auto_class.__name__

        import transformers.models.auto as auto_module

        if not hasattr(auto_module, auto_class):
            raise ValueError(f"{auto_class} is not a valid auto class.")

        cls._auto_class = auto_class

    @staticmethod
    def _get_generation_defaults() -> Dict[str, Any]:
        return {
            "max_length": 20,
            "min_length": 0,
            "do_sample": False,
            "early_stopping": False,
            "num_beams": 1,
            "num_beam_groups": 1,
            "diversity_penalty": 0.0,
            "temperature": 1.0,
            "top_k": 50,
            "top_p": 1.0,
            "typical_p": 1.0,
            "repetition_penalty": 1.0,
            "length_penalty": 1.0,
            "no_repeat_ngram_size": 0,
            "encoder_no_repeat_ngram_size": 0,
            "bad_words_ids": None,
            "num_return_sequences": 1,
            "output_scores": False,
            "return_dict_in_generate": False,
            "forced_bos_token_id": None,
            "forced_eos_token_id": None,
            "remove_invalid_values": False,
            "exponential_decay_length_penalty": None,
            "suppress_tokens": None,
            "begin_suppress_tokens": None,
        }

    def _has_non_default_generation_parameters(self) -> bool:
        """
        Whether or not this instance holds non-default generation parameters.
        """
        for parameter_name, default_value in self._get_generation_defaults().items():
            if hasattr(self, parameter_name) and getattr(self, parameter_name) != default_value:
                return True
        return False


def get_configuration_file(configuration_files: List[str]) -> str:
    """
    Get the configuration file to use for this version of transformers.

    Args:
        configuration_files (`List[str]`): The list of available configuration files.

    Returns:
        `str`: The configuration file to use.
    """
    configuration_files_map = {}
    for file_name in configuration_files:
        search = _re_configuration_file.search(file_name)
        if search is not None:
            v = search.groups()[0]
            configuration_files_map[v] = file_name
    available_versions = sorted(configuration_files_map.keys())

    # Defaults to FULL_CONFIGURATION_FILE and then tries to look at some newer versions.
    configuration_file = CONFIG_NAME
    transformers_version = version.parse(__version__)
    for v in available_versions:
        if version.parse(v) <= transformers_version:
            configuration_file = configuration_files_map[v]
        else:
            # No point going further since the versions are sorted.
            break

    return configuration_file

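# Illustrative sketch (not part of the original file): with transformers 4.30.0
# installed,
#
#     get_configuration_file(["config.json", "config.4.0.0.json", "config.4.31.0.json"])
#
# returns "config.4.0.0.json": the highest versioned file whose version does not
# exceed the running library version; "config.4.31.0.json" would be picked only on
# transformers >= 4.31.0.
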
def recursive_diff_dict(dict_a, dict_b, config_obj=None):
    """
    Helper function to recursively take the diff between two nested dictionaries. The resulting diff only contains the
    values from `dict_a` that are different from values in `dict_b`.
    """
    diff = {}
    default = config_obj.__class__().to_dict() if config_obj is not None else {}
    for key, value in dict_a.items():
        obj_value = getattr(config_obj, str(key), None)
        if isinstance(obj_value, PretrainedConfig) and key in dict_b and isinstance(dict_b[key], dict):
            diff_value = recursive_diff_dict(value, dict_b[key], config_obj=obj_value)
            if len(diff_value) > 0:
                diff[key] = diff_value
        elif key not in dict_b or value != dict_b[key] or key not in default or value != default[key]:
            diff[key] = value
    return diff


PretrainedConfig.push_to_hub = copy_func(PretrainedConfig.push_to_hub)
if PretrainedConfig.push_to_hub.__doc__ is not None:
    PretrainedConfig.push_to_hub.__doc__ = PretrainedConfig.push_to_hub.__doc__.format(
        object="config", object_class="AutoConfig", object_files="configuration file"
    )