HugoVoxx committed
Commit 8771706
1 Parent(s): 12cac54

HugoVoxx/Gemma-2-2b-it-ag

README.md CHANGED
@@ -1,68 +1,57 @@
  ---
  base_model: google/gemma-2-2b-it
- library_name: peft
- license: gemma
  tags:
  - trl
  - sft
- - generated_from_trainer
- model-index:
- - name: Gemma-2-2b-it-ag
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Gemma-2-2b-it-ag
-
- This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0122
-
- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data
-
- More information needed

  ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 2
- - optimizer: Use paged_adamw_32bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 10
- - num_epochs: 1

- ### Training results

- | Training Loss | Epoch  | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.0122        | 0.1999 | 341  | 0.0122          |
- | 0.0122        | 0.3999 | 682  | 0.0122          |
- | 0.0122        | 0.5998 | 1023 | 0.0122          |
- | 0.0122        | 0.7998 | 1364 | 0.0122          |
- | 0.0122        | 0.9997 | 1705 | 0.0122          |

- ### Framework versions

- - PEFT 0.13.2
- - Transformers 4.46.1
- - Pytorch 2.4.0
- - Datasets 3.0.2
- - Tokenizers 0.20.0

  ---
  base_model: google/gemma-2-2b-it
+ library_name: transformers
+ model_name: Gemma-2-2b-it-ag
  tags:
+ - generated_from_trainer
  - trl
  - sft
+ licence: license
  ---

+ # Model Card for Gemma-2-2b-it-ag

+ This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
+ It has been trained using [TRL](https://github.com/huggingface/trl).

+ ## Quick start

+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="HugoVoxx/Gemma-2-2b-it-ag", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```

  ## Training procedure

+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hugovoxx-fpt-university/Fine-tune%20Gemma-2-2b-it%20on%20AlphaGeometry%20Dataset/runs/xdfyig16)
+
+ This model was trained with SFT.

+ ### Framework versions

+ - TRL: 0.12.0
+ - Transformers: 4.46.1
+ - Pytorch: 2.4.0
+ - Datasets: 3.1.0
+ - Tokenizers: 0.20.0

+ ## Citations

+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title        = {{TRL: Transformer Reinforcement Learning}},
+     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year         = 2020,
+     journal      = {GitHub repository},
+     publisher    = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
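The removed card listed concrete training hyperparameters. As a minimal sketch, not the author's actual script, here is roughly how they would map onto TRL's `SFTConfig`, assuming the run used `SFTTrainer` as the new card states; `output_dir` is a hypothetical placeholder.

```python
from trl import SFTConfig

# Hypothetical reconstruction of the hyperparameters from the removed card.
training_args = SFTConfig(
    output_dir="Gemma-2-2b-it-ag",     # placeholder, not taken from the commit
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,     # total train batch size: 1 * 2 = 2
    optim="paged_adamw_32bit",         # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=1,
)
```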
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
  "v_proj",
  "q_proj",
- "down_proj",
- "o_proj",
- "up_proj",
  "gate_proj",
- "k_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,

  "rank_pattern": {},
  "revision": null,
  "target_modules": [
+ "up_proj",
  "v_proj",
+ "k_proj",
  "q_proj",
  "gate_proj",
+ "down_proj",
+ "o_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8b7f363a864ecf4f9ac277a22bdc08c38c4b7d89fd89ff17db290d2d17b5e9cf
- size 3023899144

  version https://git-lfs.github.com/spec/v1
+ oid sha256:42e5fe428e704cc8508a60c08a3a822e1c6a424a9bb6b4d3701123b4f8e26d11
+ size 664584480
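The weights file shrinks from roughly 3.0 GB to roughly 665 MB, consistent with the repo now shipping a LoRA adapter rather than merged full weights. A sketch of loading it with PEFT on top of the base model, assuming the safetensors file is the adapter that `adapter_config.json` describes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model, then attach the adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")
model = PeftModel.from_pretrained(base, "HugoVoxx/Gemma-2-2b-it-ag")
tokenizer = AutoTokenizer.from_pretrained("HugoVoxx/Gemma-2-2b-it-ag")
```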
special_tokens_map.json CHANGED
@@ -1,23 +1,29 @@
  {
  "additional_special_tokens": [
- {
- "content": "<|im_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false
- },
- {
- "content": "<|im_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false
- }
  ],
- "bos_token": "<|im_start|>",
- "eos_token": "<|im_end|>",
- "pad_token": "<|im_end|>",
  "unk_token": {
  "content": "<unk>",
  "lstrip": false,

  {
  "additional_special_tokens": [
+ "<start_of_turn>",
+ "<end_of_turn>"
  ],
+ "bos_token": {
+ "content": "<bos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<eos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
  "unk_token": {
  "content": "<unk>",
  "lstrip": false,
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e3e1a1540e64f08c2a8d2bfac6541f1ed3c14f1f09509531b3e09c650d762ae3
- size 34363348

  version https://git-lfs.github.com/spec/v1
+ oid sha256:e6ce83119bb404f7f0a6e621b76759d476357dcd01241a90f9ca136ae2b3c11c
+ size 34362972
tokenizer_config.json CHANGED
@@ -1993,34 +1993,18 @@
  "rstrip": false,
  "single_word": false,
  "special": false
- },
- "256000": {
- "content": "<|im_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "256001": {
- "content": "<|im_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
  }
  },
  "additional_special_tokens": [
- "<|im_start|>",
- "<|im_end|>"
  ],
- "bos_token": "<|im_start|>",
- "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
- "eos_token": "<|im_end|>",
- "model_max_length": 2048,
- "pad_token": "<|im_end|>",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "GemmaTokenizer",

  "rstrip": false,
  "single_word": false,
  "special": false
  }
  },
  "additional_special_tokens": [
+ "<start_of_turn>",
+ "<end_of_turn>"
  ],
+ "bos_token": "<bos>",
+ "chat_template": "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '<start_of_turn>' + role + '\n' + message['content'] | trim + '<end_of_turn>\n' }}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model\n'}}{% endif %}",
  "clean_up_tokenization_spaces": false,
+ "eos_token": "<eos>",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "<pad>",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "GemmaTokenizer",
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b2e61fc641a174ef330f2acb78e514951066939ffc24db43bb1646aef19cf8b2
  size 5496

  version https://git-lfs.github.com/spec/v1
+ oid sha256:7ffc68c09366bc5c66183fe5319583650d3e138420f72da3a8736c2a28a9be7e
  size 5496