Model save
Browse files
- README.md +42 -45
- adapter_config.json +2 -2
- adapter_model.safetensors +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -1,73 +1,70 @@
 ---
 license: mit
-library_name:
 tags:
--
--
-
 model-index:
-- name:
 results: []
-language: ['pt']
 ---

-

-

 It achieves the following results on the evaluation set:
-

 ## Intended uses & limitations

-

-## Training

-

-
-Pt:
-{'question': '### Instruction:\nVocê é um médico tratando um paciente com amnésia. Para responder as perguntas do paciente, você irá ler um texto anteriormente para se contextualizar. Se você trouxer informações desconhecidas, fora do texto lido, poderá deixar o paciente confuso. Se o paciente fizer uma questão sobre informações não presentes no texto, você precisa responder de forma educada que você não tem informação suficiente para responder, pois se tentar responder, pode trazer informações que não ajudarão o paciente recuperar sua memória.Lembre, se não estiver no texto, você precisa responder de forma educada que você não tem informação suficiente para responder. Precisamos ajudar o paciente.\n</s>### Input:\nTEXTO: {context}\n\nPERGUNTA: {question}\n</s>', 'chosen_response': '### Response:\nRESPOSTA: {chosen_response}</s>', 'rejected_response': '### Response:\nRESPOSTA: {rejected_response}</s>'}
-```

 ### Training hyperparameters

 The following hyperparameters were used during training:
 - learning_rate: 2e-05
--
--
 - gradient_accumulation_steps: 2
-- num_gpus: 1
 - total_train_batch_size: 4
-- optimizer:
-- lr_scheduler_type:
--
--
--

 ### Training results

 ### Framework versions

--
--
--
--
--
-- bitsandbytes==0.42
-- huggingface_hub==0.20.3
-- seqeval==1.2.2
-- optimum==1.17.1
-- auto-gptq==0.7.0
-- gpustat==1.1.1
-- deepspeed==0.13.2
-- wandb==0.16.3
-- trl==0.7.11
-- accelerate==0.27.2
-- coloredlogs==15.0.1
-- traitlets==5.14.1
-- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl
-
-### Hardware
-- Cloud provided: runpod.io

 ---
 license: mit
+library_name: peft
 tags:
+- trl
+- dpo
+- generated_from_trainer
+base_model: HuggingFaceH4/zephyr-7b-beta
 model-index:
+- name: WeniGPT-DPO-test
 results: []
 ---

+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->

+# WeniGPT-DPO-test

+This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset.
 It achieves the following results on the evaluation set:
+- Loss: 0.6931
+- Rewards/chosen: 0.0
+- Rewards/rejected: 0.0
+- Rewards/accuracies: 0.0
+- Rewards/margins: 0.0
+- Logps/rejected: -10.8694
+- Logps/chosen: -6.4376
+- Logits/rejected: -2.2188
+- Logits/chosen: -2.2174
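The evaluation numbers above look like a DPO model right at initialization: the chosen and rejected rewards are both 0.0, and with a zero reward margin the DPO loss reduces to -log σ(0) = ln 2 ≈ 0.6931, exactly the reported loss. A minimal check of that arithmetic (the `beta` default of 0.1 is an assumption based on trl's usual default; it drops out at zero margin anyway):

```python
import math


def dpo_loss(margin: float, beta: float = 0.1) -> float:
    """DPO loss for one preference pair: -log(sigmoid(beta * reward_margin))."""
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))


# With Rewards/chosen and Rewards/rejected both 0.0 the margin is 0,
# so the loss is -log(0.5) = ln 2, matching the reported 0.6931.
loss_at_init = dpo_loss(margin=0.0)
print(round(loss_at_init, 4))  # 0.6931
```

This is consistent with `training_steps: 1` below: after a single optimizer step the policy has barely moved from the reference model.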
+
+## Model description
+
+More information needed

 ## Intended uses & limitations

+More information needed

+## Training and evaluation data

+More information needed

+## Training procedure

 ### Training hyperparameters

 The following hyperparameters were used during training:
 - learning_rate: 2e-05
+- train_batch_size: 2
+- eval_batch_size: 2
+- seed: 42
 - gradient_accumulation_steps: 2
 - total_train_batch_size: 4
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_ratio: 0.1
+- training_steps: 1
+- mixed_precision_training: Native AMP
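The `total_train_batch_size` above is derived, not set directly: per-device batch size times gradient accumulation steps times the number of GPUs (one here, per the `num_gpus: 1` line removed from the old card). A quick sanity check:

```python
# Effective (total) train batch size =
#   per-device batch size * gradient accumulation steps * number of GPUs
train_batch_size = 2
gradient_accumulation_steps = 2
num_gpus = 1  # single GPU, per the old card's removed num_gpus line

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_gpus
print(total_train_batch_size)  # 4
```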

 ### Training results

+
+
 ### Framework versions

+- PEFT 0.8.2
+- Transformers 4.38.2
+- Pytorch 2.1.0+cu118
+- Datasets 2.17.1
+- Tokenizers 0.15.1
adapter_config.json
CHANGED
@@ -19,9 +19,9 @@
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
-"q_proj",
-"v_proj",
 "k_proj",
 "o_proj"
 ],
 "task_type": "CAUSAL_LM",

 "rank_pattern": {},
 "revision": null,
 "target_modules": [
 "k_proj",
+"v_proj",
+"q_proj",
 "o_proj"
 ],
 "task_type": "CAUSAL_LM",
adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
 size 27297032

 version https://git-lfs.github.com/spec/v1
+oid sha256:609ff106b41c08ad5ef37838383ab09c7fbee92554a03cdadef8ad9bd3cde402
 size 27297032
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
 size 5112

 version https://git-lfs.github.com/spec/v1
+oid sha256:0265c7f3725c39111fac09340c90bf9088f63f1187513fe58e1a06a4bcd09c40
 size 5112
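The `adapter_model.safetensors` and `training_args.bin` entries above are Git LFS pointer files, not the binaries themselves: three `key value` lines giving the spec version, the object's SHA-256 digest, and its size in bytes. Only the pointer lives in git; the commit swaps in the digest of the newly uploaded blob. A small sketch of reading such a pointer (using the `training_args.bin` pointer from this diff):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"; split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:0265c7f3725c39111fac09340c90bf9088f63f1187513fe58e1a06a4bcd09c40
size 5112
"""

fields = parse_lfs_pointer(pointer)
print(fields["size"])  # 5112
print(fields["oid"])   # sha256:0265c7f3725c39111fac09340c90bf9088f63f1187513fe58e1a06a4bcd09c40
```

The 5112-byte size is why `training_args.bin` diffs show only the `oid` line changing: the serialized arguments struct is the same length, but its content (and thus its digest) differs.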