Latest commit: Update README.md (bcce68a)

| File | Size | Last commit message |
|------|------|----------------------|
| 1_Pooling | | first model |
| eval | | first model |
| | 6.15 kB | improving model |
| | 1.52 kB | initial commit |
| | 292 Bytes | Update README.md |
| | 784 Bytes | first model |
| | 124 Bytes | first model |
| | 349 Bytes | first model |
| pytorch_model.bin | 1.34 GB | ADDING PYTORCH MODEL |
| | 52 Bytes | first model |
| | 125 Bytes | first model |
| | 712 kB | first model |
| | 1.24 kB | first model |
| | 232 kB | first model |
The pickle scan of pytorch_model.bin detected the following imports (36):
- "transformers.models.bert.modeling_bert.BertEncoder",
- "transformers.models.bert.modeling_bert.BertIntermediate",
- "torch.nn.modules.normalization.LayerNorm",
- "sentence_transformers.models.Pooling.Pooling",
- "torch.nn.modules.sparse.Embedding",
- "torch._C._nn.gelu",
- "transformers.models.bert.modeling_bert.BertSelfAttention",
- "tokenizers.models.Model",
- "torch._utils._rebuild_tensor_v2",
- "torch.nn.modules.container.ModuleList",
- "transformers.models.bert.modeling_bert.BertModel",
- "__builtin__.set",
- "torch.FloatStorage",
- "transformers.models.bert.configuration_bert.BertConfig",
- "sentence_transformers.models.Normalize.Normalize",
- "torch.nn.modules.dropout.Dropout",
- "torch.device",
- "transformers.models.bert.modeling_bert.BertAttention",
- "transformers.models.bert.modeling_bert.BertSelfOutput",
- "transformers.activations.GELUActivation",
- "torch.nn.modules.activation.Tanh",
- "transformers.models.bert.tokenization_bert_fast.BertTokenizerFast",
- "collections.OrderedDict",
- "sentence_transformers.models.Transformer.Transformer",
- "transformers.models.bert.modeling_bert.BertEmbeddings",
- "torch.float32",
- "tokenizers.AddedToken",
- "torch.LongStorage",
- "tokenizers.Tokenizer",
- "transformers.models.bert.modeling_bert.BertPooler",
- "torch.nn.modules.linear.Linear",
- "transformers.models.bert.modeling_bert.BertLayer",
- "sentence_transformers.SentenceTransformer.SentenceTransformer",
- "torch._utils._rebuild_parameter",
- "_codecs.encode",
- "transformers.models.bert.modeling_bert.BertOutput"
pytorch_model.bin stores its weights as a pickled torch checkpoint, which is why the scanner reports the imports above; the usual fix for the pickle warning is to re-save the weights in the safetensors format.
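A minimal re-serialization sketch, assuming a recent sentence-transformers and transformers stack in which safetensors serialization is the default; the repository id and output directory are placeholders:

```python
# Hedged sketch: load the pickled checkpoint and write it back out.
# With a recent transformers release, save_pretrained (called internally by
# SentenceTransformer.save) writes model.safetensors rather than pytorch_model.bin.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("org/model-name")  # hypothetical repository id
model.save("model-safetensors")                # hypothetical local output directory
```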