Hum-Works/lodestone-base-4096-v1
Maintained by Hum LLC
Task: Sentence Similarity
Tags: sentence-transformers, PyTorch, bert, feature-extraction, mteb, custom_code, Eval Results, text-embeddings-inference
Datasets: 26 datasets
Language: English
arXiv: 6 papers
License: apache-2.0
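The model's task is Sentence Similarity: texts are embedded as vectors, and similarity between two texts is scored between their embedding vectors, conventionally with cosine similarity. A minimal self-contained sketch of that scoring step (the toy 4-dimensional vectors below are illustrative stand-ins, not real model outputs):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors: a.b / (|a| |b|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" (real sentence embeddings are much wider).
a = np.array([1.0, 0.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 1.0, 0.0])
c = np.array([0.0, 1.0, 0.0, 1.0])

print(cosine_similarity(a, b))  # identical vectors -> 1.0
print(cosine_similarity(a, c))  # orthogonal vectors -> 0.0
```

Scores range from -1 to 1, with higher values meaning more similar texts.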
Branch: main | 3 contributors | History: 12 commits
Latest commit: 9bbc2d0 by dylanAtHum, "Update Replication Instructions to Use Script to Load Pretrained Model", about 1 year ago
Directories
  1_Pooling/                                     Initial Commit
  mteb_results/                                  Add CQADupstack Benchmarks

Files
  .gitattributes                      1.52 kB    initial commit
  Data_Records.ipynb                  3.03 kB    Initial Commit
  Dataloading.ipynb                   39.9 kB    Initial Commit
  README.md                           79.9 kB    Update README
  Replication.txt                     4.22 kB    Update Replication Instructions to Use Script to Load Pretrained Model
  Training.py                         19 kB      Initial Commit
  bert_layers.py                      47.3 kB    Initial Commit
  bert_padding.py                     6.26 kB    Initial Commit
  config.json                         891 Bytes  Using seq_length Config Rather than max_position_embeddings
  config_sentence_transformers.json   123 Bytes  Initial Commit
  configuration_bert.py               1.01 kB    Initial Commit
  data_records.json                   58.5 kB    Initial Commit
  flash_attn_triton.py                42.7 kB    Initial Commit
  load_mosaic.py                      808 Bytes  Update Replication Instructions to Use Script to Load Pretrained Model
  modules.json                        349 Bytes  Initial Commit
  pytorch_model.bin (LFS)             275 MB     Initial Commit
    (pickle checkpoint; detected pickle imports: torch.BFloat16Storage, torch._utils._rebuild_tensor_v2, collections.OrderedDict)
  sentence_bert_config.json           54 Bytes   Initial Commit
  special_tokens_map.json             125 Bytes  Initial Commit
  tokenizer.json                      712 kB     Initial Commit
  tokenizer_config.json               315 Bytes  Initial Commit
  vocab.txt                           232 kB     Initial Commit

All entries were last modified about 1 year ago.
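pytorch_model.bin is a pickle-serialized checkpoint; the detected pickle imports listed for it are the standard torch serialization internals (a bfloat16 storage, the tensor rebuild helper, and OrderedDict). As a general precaution when loading any pickle checkpoint directly, torch.load supports a weights_only=True mode that restricts unpickling to tensors and plain containers. A small self-contained sketch using a toy checkpoint, not this repository's weights:

```python
import os
import tempfile
from collections import OrderedDict

import torch

# Save a toy state dict the way torch serializes checkpoints:
# an OrderedDict of tensors (here bfloat16, matching the storage
# type detected in the real checkpoint).
state = OrderedDict(weight=torch.zeros(2, 2, dtype=torch.bfloat16))
path = os.path.join(tempfile.gettempdir(), "toy_checkpoint.bin")
torch.save(state, path)

# weights_only=True limits unpickling to tensors and plain containers,
# guarding against arbitrary-code-execution payloads in pickle files.
loaded = torch.load(path, weights_only=True)
print(loaded["weight"].dtype)  # torch.bfloat16
```

When loading through the sentence-transformers wrappers instead of torch.load directly, the library handles deserialization itself; the sketch above only illustrates the safer low-level option.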