HugoVoxx committed
Commit 2a71681
1 Parent(s): fa38bf3

HugoVoxx/llama-3.2-1b-it-ag

README.md CHANGED
@@ -1,57 +1,59 @@
  ---
  base_model: unsloth/Llama-3.2-1B-Instruct
- library_name: transformers
- model_name: llama-3.2-1b-it-ag
+ library_name: peft
+ license: llama3.2
  tags:
- - generated_from_trainer
  - trl
  - sft
- licence: license
+ - generated_from_trainer
+ model-index:
+ - name: llama-3.2-1b-it-ag
+   results: []
  ---

- # Model Card for llama-3.2-1b-it-ag
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

- This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct).
- It has been trained using [TRL](https://github.com/huggingface/trl).
+ # llama-3.2-1b-it-ag

- ## Quick start
+ This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on an unknown dataset.

- ```python
- from transformers import pipeline
+ ## Model description

- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="HugoVoxx/llama-3.2-1b-it-ag", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
+ More information needed

- ## Training procedure
+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

- This model was trained with SFT.
+ More information needed

- ### Framework versions
+ ## Training procedure
+
+ ### Training hyperparameters

- - TRL: 0.12.0.dev0
- - Transformers: 4.45.2
- - Pytorch: 2.6.0.dev20240922+cu124
- - Datasets: 3.0.1
- - Tokenizers: 0.20.0
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 5

- ## Citations
+ ### Training results



- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title = {{TRL: Transformer Reinforcement Learning}},
-     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
-     year = 2020,
-     journal = {GitHub repository},
-     publisher = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
+ ### Framework versions
+
+ - PEFT 0.13.2
+ - Transformers 4.45.2
+ - Pytorch 2.4.0
+ - Datasets 3.0.2
+ - Tokenizers 0.20.0
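The hyperparameter list in the new card maps almost one-to-one onto TRL's `SFTConfig`. The sketch below is a hypothetical reconstruction, not the author's script: the commit does not include training code, and the dataset is unknown, so a one-row placeholder stands in for it.

```python
# Hypothetical reconstruction of the hyperparameters listed in the model card.
# The real dataset and training script are not published.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

train_dataset = Dataset.from_list([{"text": "placeholder example"}])  # unknown data

config = SFTConfig(
    output_dir="llama-3.2-1b-it-ag",
    learning_rate=2e-4,              # learning_rate: 0.0002
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=8,   # 2 * 8 = total_train_batch_size 16
    lr_scheduler_type="linear",
    warmup_steps=10,                 # lr_scheduler_warmup_steps: 10
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
)

trainer = SFTTrainer(
    model="unsloth/Llama-3.2-1B-Instruct",  # base model named in the card
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```

The effective batch size of 16 follows from 2 samples per device times 8 accumulation steps, matching the card's `total_train_batch_size`.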
 
 
 
 
 
adapter_config.json CHANGED
@@ -1,34 +1,34 @@
- {
-   "alpha_pattern": {},
-   "auto_mapping": null,
-   "base_model_name_or_path": "unsloth/Llama-3.2-1B-Instruct",
-   "bias": "none",
-   "fan_in_fan_out": false,
-   "inference_mode": true,
-   "init_lora_weights": true,
-   "layer_replication": null,
-   "layers_pattern": null,
-   "layers_to_transform": null,
-   "loftq_config": {},
-   "lora_alpha": 16,
-   "lora_dropout": 0.01,
-   "megatron_config": null,
-   "megatron_core": "megatron.core",
-   "modules_to_save": null,
-   "peft_type": "LORA",
-   "r": 8,
-   "rank_pattern": {},
-   "revision": null,
-   "target_modules": [
-     "q_proj",
-     "up_proj",
-     "gate_proj",
-     "down_proj",
-     "o_proj",
-     "v_proj",
-     "k_proj"
-   ],
-   "task_type": "CAUSAL_LM",
-   "use_dora": false,
-   "use_rslora": false
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "unsloth/Llama-3.2-1B-Instruct",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 256,
+   "lora_dropout": 0.01,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 128,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "up_proj",
+     "k_proj",
+     "o_proj",
+     "v_proj",
+     "gate_proj",
+     "down_proj",
+     "q_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
  }
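The substance of this change: the LoRA rank rises from 8 to 128 and `lora_alpha` from 16 to 256, so the alpha-to-rank ratio stays at 2, and the same seven projection matrices are adapted. For clarity only (the repo ships just the generated JSON), the new configuration corresponds to a peft `LoraConfig` like this:

```python
# The new adapter_config.json expressed as a peft LoraConfig.
# Values are copied from the diff above; this is illustrative, not repo code.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,                 # was 8 before this commit
    lora_alpha=256,        # was 16; alpha/r ratio stays at 2
    lora_dropout=0.01,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
)
```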
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4eb3d946af99b14f14080212461b54ccdd4dde9e871839e64b28d9dc1f78f811
- size 1073263472
+ oid sha256:db8a91ccfcd440c96c1304447baba0ddc73541baaa4f6d0346bc9c2d18e5e291
+ size 1411430208
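Since the card declares `library_name: peft`, the usual way to consume this adapter file is to let peft resolve the base model from `adapter_config.json` and attach the LoRA weights. A minimal loading sketch (not taken from the repo; assumes network access to the Hub):

```python
# Minimal loading sketch. AutoPeftModelForCausalLM reads adapter_config.json,
# fetches the base model it names (unsloth/Llama-3.2-1B-Instruct), and
# attaches the LoRA adapter stored in adapter_model.safetensors.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "HugoVoxx/llama-3.2-1b-it-ag",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("HugoVoxx/llama-3.2-1b-it-ag")
```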
runs/Oct23_01-44-10_0df87f83d07d/events.out.tfevents.1729647852.0df87f83d07d.30.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03a21509782e4b600ef0e00098404fecf9b794e050512d81f7651c1a0991d362
+ size 6364
special_tokens_map.json CHANGED
@@ -1,21 +1,21 @@
- {
-   "additional_special_tokens": [
-     {
-       "content": "<|im_start|>",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false
-     },
-     {
-       "content": "<|im_end|>",
-       "lstrip": false,
-       "normalized": false,
-       "rstrip": false,
-       "single_word": false
-     }
-   ],
-   "bos_token": "<|im_start|>",
-   "eos_token": "<|im_end|>",
-   "pad_token": "<|im_end|>"
- }
+ {
+   "additional_special_tokens": [
+     {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "bos_token": "<|im_start|>",
+   "eos_token": "<|im_end|>",
+   "pad_token": "<|im_end|>"
+ }
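Note that the map keeps ChatML-style markers: `<|im_start|>` as BOS, with `<|im_end|>` serving as both EOS and pad token. A quick sanity check (illustrative, not part of the repo):

```python
# Verify that the tokenizer reports the special tokens declared in
# special_tokens_map.json above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HugoVoxx/llama-3.2-1b-it-ag")
print(tokenizer.bos_token)  # expected: <|im_start|>
print(tokenizer.eos_token)  # expected: <|im_end|>
print(tokenizer.pad_token)  # expected: <|im_end|>
```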
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:510d12ec255f4cb0304aa5428d699c354c1a49696b427a2748a7b03bb7bbb575
- size 17210296
+ oid sha256:a5183f5ff3c6a5bb58733e2d242ef948580948862f5c16473564707ff1e5b48e
+ size 17210395
tokenizer_config.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:47cf5c33e0553608bd82b8d6cf0d0e5abae883131af6b0a85ecd6a370683779b
- size 5432
+ oid sha256:f0ecc28b9d61250a22a4f59bc20e8e210efab66c53df03e3cb30011c8e909453
+ size 5496