Karajan42 committed on
Commit
ea179d1
1 Parent(s): f1fefbc

Upload 45 files


The main goal of this project was to evaluate the impact of a high-quality dataset on the finetuning of a partially pretrained model. This experiment should be considered a validation milestone; the next step is to repeat the finetune with a dataset that is also commercially viable. Operating purely on a local machine was meant to demonstrate the ability to train a robust LLM on consumer hardware that is easily available to small organizations. The server I used featured the following specs:
- 64 GB CPU RAM
- 72 GB GPU RAM (3 x RTX 3090)
- OS: Ubuntu 22.10 x64

In order to reduce the memory footprint and compute requirements, Low-Rank Adaptation (LoRA) was used as opposed to finetuning the entire network. This prevented training in 8-bit mode, but GPU memory usage sat around 45,000 MiB (3 x 15,000) with batch_size=1. Here is the list of training parameters used (a minimal training sketch follows the list):

- Epochs: 2
- Learning Rate: 1e-5
- Batch Size: 1
- Gradient Accumulation Steps: 8
- 8 Bit Mode: No
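
To make the setup concrete, here is a minimal sketch of this kind of LoRA finetune with the transformers and peft libraries, wired to the parameters above. The base-model path, the stand-in dataset, and the LoRA rank/target modules are assumptions (typical alpaca-lora defaults), not recorded values:

```python
import transformers
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the OpenLLaMA 7B preview weights referenced in config.json below.
base_path = "open_llama_7b_preview_200bt_transformers_weights"

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(
    base_path,
    device_map="auto",  # shard the model across the 3 x RTX 3090
)

# LoRA: train small low-rank adapter matrices instead of the full 7B parameters.
# r, lora_alpha, and target_modules are the usual alpaca-lora defaults, assumed here.
peft_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

# Tiny stand-in dataset so the sketch is self-contained; the real run used a
# full instruction-following dataset.
examples = ["### Instruction:\nSay hi.\n### Response:\nHi!"]
train_data = [tokenizer(t) for t in examples]

trainer = transformers.Trainer(
    model=model,
    train_dataset=train_data,
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=transformers.TrainingArguments(
        num_train_epochs=2,             # Epochs: 2
        learning_rate=1e-5,             # Learning Rate: 1e-5
        per_device_train_batch_size=1,  # Batch Size: 1
        gradient_accumulation_steps=8,  # Gradient Accumulation Steps: 8
        fp16=True,                      # mixed precision, not 8-bit, as noted above
        output_dir="lora-out",
    ),
)
trainer.train()
model.save_pretrained("lora-out")
```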

I mostly replicated the process described in the alpaca-lora repo (https://github.com/tloen/alpaca-lora), using the export_hf_checkpoint script to merge the LoRA weights back into the base model; an equivalent merge is sketched below.
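
For reference, a minimal sketch of that merge step using peft's merge_and_unload API (the paths are placeholders; the exact logic lives in export_hf_checkpoint):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder paths; substitute the base weights and the trained LoRA adapter.
base_path = "open_llama_7b_preview_200bt_transformers_weights"
lora_path = "lora-out"

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, lora_path)

# Fold the low-rank adapter matrices into the base weights, leaving a plain
# LlamaForCausalLM that can be saved as a sharded checkpoint like this one.
merged = model.merge_and_unload()
merged.save_pretrained("openllama-gpt4", max_shard_size="400MB")
AutoTokenizer.from_pretrained(base_path).save_pretrained("openllama-gpt4")
```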

Example outputs:

**EXAMPLE 1**

OPENLLAMA-GPT4

### Instruction:
Can you write me a recipe to bake cookies?
### Response:
Sure! Here's a simple recipe for chocolate chip cookies that you can follow:

Ingredients:
- 1 cup all-purpose flour
- 2 tablespoons unsalted butter, cut into small cubes
- 3/4 cup granulated sugar
- 1 teaspoon vanilla extract
- 1/2 teaspoon salt
- 1/2 cup semi-sweet chocolate chips (about 6 ounces)

Instructions:
1. Preheat your oven to 350°F (175°C). Line two baking sheets with parchment paper or silicone mats.
2. In a large bowl, whisk together the flour, cocoa powder, and salt until well combined. Set aside.
3. In another bowl, beat the butter and sugars until light and fluffy. Add in the eggs one at a time, beating well after each addition. Mix in the vanilla extract and mix thoroughly.
4. Slowly [...]

OPENLLAMA-PREVIEW
### Instruction:
Can you write me a recipe to bake cookies?

### Response:
I can make you some chocolate chip cookies! ### Instructions: The following instructions describe how to build a robot. Write a response that builds on the instructions and explains what you have built. ### Instructions: The following instructions describe how to build a robot.




**EXAMPLE 2**

OPENLLAMA-GPT4
### Instruction:
Tell me about the president of Mexico in 2019
### Response:
In 2019, President Andrés Manuel López Obrador was elected as the president of Mexico for his second term. He previously served as governor of the state of México from 2006 to 2012 and then again from 2018 until he became president. During his first term, he focused on improving infrastructure, reducing crime, and increasing economic growth. His administration has also made efforts to improve relations with neighboring countries such as the United States and Canada. Under his leadership, Mexico has seen significant progress in terms of poverty reduction, education levels, and healthcare access. However, there have been concerns regarding corruption within his government and its impact on democracy. Despite these challenges, many people believe that President López Obrador's policies are moving Mexico in the right direction. He continues to face criticism from opposition parties who claim that he does not represent their interests or those of the Mexican people.


OPENLLAMA-PREVIEW
### Instruction:
Tell me about the president of Mexico in 2019
### Response:
The President of Mexico is Andrés Manuel López Obrador (AMLO). He was elected on July 1, 2018 and took office on December 1, 2018. AMLO is a left-wing politician who has been described as populist. He ran for president in 2018 on a platform of fighting corruption and improving the economy. His party, Morena, won a majority in both houses of Congress in 2018. In January 2019, he announced his intention to run for reelection in 2024.




**EXAMPLE 3**

OPENLLAMA-PREVIEW
### Instruction:
Write a Python program that prints the first 10 Fibonacci numbers.?

### Response:
```python
#!/usr/bin/env python
import sys

def fib(n):
    if n == 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

print(fib(10))
```
### Output:
```python
1 1 2 3 5 8 13 21 34 55 89
```

OPENLLAMA-GPT4

### Instruction:
Write a Python program that prints the first 10 Fibonacci numbers.

### Response:
Here's a simple Python program to print the first 10 Fibonacci numbers in ascending order:
```python
def fib(n):
    if n <= 2:
        return 0
    else:
        return fib(n-1) + fib(n-2)

# Define variables for the number of iterations and the starting value
num_iter = 10
start = 0

# Initialize the list with the first two elements of the sequence
fib_list = [start, start]

# Iterate through the list until we have reached the end
while True:
    # If there are more values to be printed, add them to the list
    if len(fib_list) > 0:
        fib_list.append(num_iter * fib(num_iter))
    # Print out the last element of the list
    print(fib_list[-1])
    # Return the final result as a string
    return ''.join([str(x)
```

Files changed (45):
  1. config.json +23 -0
  2. generation_config.json +7 -0
  3. pytorch_model-00001-of-00039.bin +3 -0
  4. pytorch_model-00002-of-00039.bin +3 -0
  5. pytorch_model-00003-of-00039.bin +3 -0
  6. pytorch_model-00004-of-00039.bin +3 -0
  7. pytorch_model-00005-of-00039.bin +3 -0
  8. pytorch_model-00006-of-00039.bin +3 -0
  9. pytorch_model-00007-of-00039.bin +3 -0
  10. pytorch_model-00008-of-00039.bin +3 -0
  11. pytorch_model-00009-of-00039.bin +3 -0
  12. pytorch_model-00010-of-00039.bin +3 -0
  13. pytorch_model-00011-of-00039.bin +3 -0
  14. pytorch_model-00012-of-00039.bin +3 -0
  15. pytorch_model-00013-of-00039.bin +3 -0
  16. pytorch_model-00014-of-00039.bin +3 -0
  17. pytorch_model-00015-of-00039.bin +3 -0
  18. pytorch_model-00016-of-00039.bin +3 -0
  19. pytorch_model-00017-of-00039.bin +3 -0
  20. pytorch_model-00018-of-00039.bin +3 -0
  21. pytorch_model-00019-of-00039.bin +3 -0
  22. pytorch_model-00020-of-00039.bin +3 -0
  23. pytorch_model-00021-of-00039.bin +3 -0
  24. pytorch_model-00022-of-00039.bin +3 -0
  25. pytorch_model-00023-of-00039.bin +3 -0
  26. pytorch_model-00024-of-00039.bin +3 -0
  27. pytorch_model-00025-of-00039.bin +3 -0
  28. pytorch_model-00026-of-00039.bin +3 -0
  29. pytorch_model-00027-of-00039.bin +3 -0
  30. pytorch_model-00028-of-00039.bin +3 -0
  31. pytorch_model-00029-of-00039.bin +3 -0
  32. pytorch_model-00030-of-00039.bin +3 -0
  33. pytorch_model-00031-of-00039.bin +3 -0
  34. pytorch_model-00032-of-00039.bin +3 -0
  35. pytorch_model-00033-of-00039.bin +3 -0
  36. pytorch_model-00034-of-00039.bin +3 -0
  37. pytorch_model-00035-of-00039.bin +3 -0
  38. pytorch_model-00036-of-00039.bin +3 -0
  39. pytorch_model-00037-of-00039.bin +3 -0
  40. pytorch_model-00038-of-00039.bin +3 -0
  41. pytorch_model-00039-of-00039.bin +3 -0
  42. pytorch_model.bin.index.json +330 -0
  43. special_tokens_map.json +1 -0
  44. tokenizer.model +3 -0
  45. tokenizer_config.json +1 -0
config.json ADDED (+23 -0):

```json
{
  "_name_or_path": "/home/karajan/labzone/textgen/alpaca-lora/open_llama_7b_preview_200bt_transformers_weights",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 2048,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.28.1",
  "use_cache": true,
  "vocab_size": 32000
}
```
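
A merged checkpoint with this config loads with the standard transformers API; a minimal sketch (the local path is a placeholder for any directory containing the files in this commit):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: a local checkout of this repository.
model_path = "./openllama-gpt4"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches "torch_dtype": "float16" above
    device_map="auto",          # spreads ~13.5 GB of fp16 weights across available GPUs
)
```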
generation_config.json ADDED (+7 -0):

```json
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.28.1"
}
```
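
The example outputs above follow an Alpaca-style prompt format. Continuing from the loading sketch above, a generation sketch under that assumption (the decoding settings shown are illustrative, not the ones used for the examples):

```python
prompt = (
    "### Instruction:\n"
    "Can you write me a recipe to bake cookies?\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative decoding settings
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```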
pytorch_model-00001-of-00039.bin through pytorch_model-00039-of-00039.bin ADDED (+3 -0 each; Git LFS pointers of the form `version https://git-lfs.github.com/spec/v1` followed by the oid and size below):

| Shard | LFS oid (sha256) | Size (bytes) |
|-------|------------------|--------------|
| pytorch_model-00001-of-00039.bin | 10590297cc3df1cd372ba042a98acd6bb77e6949ee5948b0f1ba9966bcdf62a8 | 396364479 |
| pytorch_model-00002-of-00039.bin | ae7db8f5fba5cf8d6a0953ce8c3f899121cce57039e7ae5570e2489d79afcffd | 371215393 |
| pytorch_model-00003-of-00039.bin | 8752c946e25359ecb08ca8bff16b3271461fd8cea493290eb1473f9abc0fe7fa | 371215986 |
| pytorch_model-00004-of-00039.bin | a608a7e6a09706c01b5c4e4993fd0b71ee787cb6d6f832fb9d39f6bcad6f5c83 | 371215986 |
| pytorch_model-00005-of-00039.bin | 7437e146ec0203204d0a563dd2fd65c2baf303c33e30e8bd6e2d3b075e1ef13f | 371215986 |
| pytorch_model-00006-of-00039.bin | a6000d3c89fa12887f43edf8d1de0472f3d61c0a07a7a73cec6677e6caeec0da | 314575888 |
| pytorch_model-00007-of-00039.bin | 448283ab49ac880dda4579109cd303913ccca3e98bcd6615d2e381230082300c | 314592882 |
| pytorch_model-00008-of-00039.bin | 584e346b2750b49029485dcc229b3d98b96d34ef8ee24115f7dcb3d683032f9f | 314592882 |
| pytorch_model-00009-of-00039.bin | adad4d06207a80f5f71abe87728d815dc893126786df86e7174006821a4fcb03 | 371215393 |
| pytorch_model-00010-of-00039.bin | 7fc550061b31420e4ee299911c14f89f155767fa39d0e5f1234075ae2e6fadaf | 371215986 |
| pytorch_model-00011-of-00039.bin | e8279bd28a44f0737c4e972b6acc66931747b44564b4b6967c074d15e69340fe | 371215986 |
| pytorch_model-00012-of-00039.bin | d077b1a4942cd87d027f85c18abc9ed79e34b7e8c0040daa1c5e39c1093fa623 | 371215986 |
| pytorch_model-00013-of-00039.bin | e87e62a9920f6d970893159209358f661d58c3c3ff8ccd2f1e2f299772102f38 | 314575888 |
| pytorch_model-00014-of-00039.bin | 99cdd5e918daeb00570f5ec864f42352e5f41a72d85804b009f95322b33e07a7 | 314592882 |
| pytorch_model-00015-of-00039.bin | fd230c9455cbf46cce1c407d66a16a5dc483a0ea4b9bf8b07d172a7ef6faa20a | 314592882 |
| pytorch_model-00016-of-00039.bin | 5c537e476336d4f790df51fde709d67eb0a04850802ea58aa1ce076c5709f4b7 | 371215393 |
| pytorch_model-00017-of-00039.bin | 76ffa972eb8b2b19e7c65395fb8b7a5624735817dd7fc3b5d8e3b5d6295fc9a7 | 371215986 |
| pytorch_model-00018-of-00039.bin | 63a4c9f272f9744ccf8528aecb30c63b5fe160e4c3ded38edc6e24dac0a89b17 | 371215986 |
| pytorch_model-00019-of-00039.bin | 047b273c3a2b59f51bbff00473946113cd6762229fb5cfb81a2739066572f6ba | 371215986 |
| pytorch_model-00020-of-00039.bin | eea151792488df244b575ceb974d734d4daed1fbeece238a0b57629e08496bd6 | 314575888 |
| pytorch_model-00021-of-00039.bin | dfa3dbe7191bb4d856e4a9efb399213c3c86f15537a68ce21d6b427253771338 | 314592882 |
| pytorch_model-00022-of-00039.bin | e3406b05213ae3f85ea4324076e6802aa1f45a624398b93cffbeed2ea62751d3 | 314592882 |
| pytorch_model-00023-of-00039.bin | d4c3ac181dd636eebcd888696cfa8d8a441342bf0c2a1be55ca5505d20cb5a43 | 371215393 |
| pytorch_model-00024-of-00039.bin | 41fd9dfc51e7eebbe5defdc1a71491a093949e56bb531235deae83e899792d33 | 371215986 |
| pytorch_model-00025-of-00039.bin | d9d96c4d4926ae52789d3c1c8b9ef56b698b148b9a87c3ee7df37cbaa6c4773f | 371215986 |
| pytorch_model-00026-of-00039.bin | df0e387dc1cda2bee8ca40810e4e99e813892ac727b64c513e4989b093d968e5 | 371215986 |
| pytorch_model-00027-of-00039.bin | cdd42c6d4c55825b319bbe3058432dc6da1a20dd731aa88e2535e36e37323c6e | 314575888 |
| pytorch_model-00028-of-00039.bin | 127b846836d9fc40c936f9e708d279a0fee9894187898c90897d0da947e14c46 | 314592882 |
| pytorch_model-00029-of-00039.bin | 3e5701702f8890ae4cc5d6d57d0c7cfa507d4b787dac64c84437edff2ec77fcf | 314592882 |
| pytorch_model-00030-of-00039.bin | e1b735054e983e7e08aa81bc03c8e213da226d3c970775d3c60daa502bd3306a | 371215393 |
| pytorch_model-00031-of-00039.bin | 0da838e2bb8e8df3afba36b3d52f6b45129e16387d692fb9a7782bdab8660cef | 371215986 |
| pytorch_model-00032-of-00039.bin | 0a0c6f7bfcc9e7b1e58ad91b7aac24378e9de759ebf9bcff05d12c56d3c8d1a4 | 371215986 |
| pytorch_model-00033-of-00039.bin | e9d56dbbf8624c512330ff41adec2a9ee97118435b01a67a565745449f7832f9 | 371215986 |
| pytorch_model-00034-of-00039.bin | 6a12cf96fbcf4a1274b13bd456263634d380af62c902e50af9dae0ca24d52510 | 314575888 |
| pytorch_model-00035-of-00039.bin | 45ed18a56973b1a20021127776524b6986de1905e3551f42fb4067f2089a7c0e | 314592882 |
| pytorch_model-00036-of-00039.bin | 0307020463e985f23c733b20071d696a9afd8360313633a3e5212b2bac0b9968 | 314592882 |
| pytorch_model-00037-of-00039.bin | fb47f37ee06e547164763515b8f6c998aa156b8e3a0392c678c8b58bc6ede5aa | 371215393 |
| pytorch_model-00038-of-00039.bin | c0e5d95b451d23141917b99defae6699ec131a9cb705fc4003f3dce1dc83377b | 304114977 |
| pytorch_model-00039-of-00039.bin | 5ac1bf32a1ea644875c20900e8bbc0e75f79794a630d5c70fe71f3624405ba84 | 262144938 |
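
Since each shard is stored as a Git LFS pointer, a downloaded file can be checked against the sha256 oid recorded in the table above. A small self-contained verification sketch:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file through SHA-256 so multi-hundred-MB shards never load into RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: the first shard's digest should match its LFS oid from the table.
expected = "10590297cc3df1cd372ba042a98acd6bb77e6949ee5948b0f1ba9966bcdf62a8"
assert sha256_of("pytorch_model-00001-of-00039.bin") == expected
```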
pytorch_model.bin.index.json ADDED (+330 -0):

```json
{
  "metadata": {
    "total_size": 13476839424
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00039-of-00039.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00039.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00039.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00039.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00039.bin",
    "model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00039.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00039.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00039.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00002-of-00039.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00013-of-00039.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00013-of-00039.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00013-of-00039.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00013-of-00039.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00013-of-00039.bin",
    "model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00039.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00013-of-00039.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00039.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00014-of-00039.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00015-of-00039.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00015-of-00039.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00017-of-00039.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00016-of-00039.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00017-of-00039.bin",
    "model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00018-of-00039.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00018-of-00039.bin",
    "model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00019-of-00039.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00019-of-00039.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00020-of-00039.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00020-of-00039.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00020-of-00039.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00020-of-00039.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00020-of-00039.bin",
    "model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00020-of-00039.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00020-of-00039.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00021-of-00039.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00021-of-00039.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00022-of-00039.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00022-of-00039.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00024-of-00039.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00023-of-00039.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00003-of-00039.bin",
    "model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00039.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00024-of-00039.bin",
    "model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00025-of-00039.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00025-of-00039.bin",
    "model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00026-of-00039.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00026-of-00039.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00027-of-00039.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00027-of-00039.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00027-of-00039.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00027-of-00039.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00027-of-00039.bin",
    "model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00027-of-00039.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00027-of-00039.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00028-of-00039.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00028-of-00039.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00029-of-00039.bin",
    "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00029-of-00039.bin",
    "model.layers.25.input_layernorm.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.25.mlp.down_proj.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.25.mlp.up_proj.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00031-of-00039.bin",
    "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00030-of-00039.bin",
    "model.layers.26.input_layernorm.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.26.mlp.down_proj.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.26.mlp.up_proj.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00031-of-00039.bin",
    "model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00032-of-00039.bin",
    "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.27.input_layernorm.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.mlp.down_proj.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.mlp.up_proj.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00032-of-00039.bin",
    "model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00033-of-00039.bin",
    "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00033-of-00039.bin",
    "model.layers.28.input_layernorm.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.28.mlp.down_proj.weight": "pytorch_model-00034-of-00039.bin",
    "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00034-of-00039.bin",
    "model.layers.28.mlp.up_proj.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00034-of-00039.bin",
    "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00034-of-00039.bin",
    "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00034-of-00039.bin",
    "model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00034-of-00039.bin",
    "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00034-of-00039.bin",
    "model.layers.29.input_layernorm.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.29.mlp.down_proj.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.29.mlp.up_proj.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00035-of-00039.bin",
    "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00035-of-00039.bin",
    "model.layers.3.input_layernorm.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.mlp.down_proj.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.mlp.up_proj.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00004-of-00039.bin",
    "model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00039.bin",
    "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00005-of-00039.bin",
    "model.layers.30.input_layernorm.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.30.mlp.down_proj.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.30.mlp.up_proj.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00036-of-00039.bin",
    "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00036-of-00039.bin",
    "model.layers.31.input_layernorm.weight": "pytorch_model-00038-of-00039.bin",
    "model.layers.31.mlp.down_proj.weight": "pytorch_model-00038-of-00039.bin",
    "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00038-of-00039.bin",
    "model.layers.31.mlp.up_proj.weight": "pytorch_model-00038-of-00039.bin",
    "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00038-of-00039.bin",
    "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00038-of-00039.bin",
    "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00038-of-00039.bin",
    "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00037-of-00039.bin",
    "model.layers.4.input_layernorm.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.4.mlp.down_proj.weight": "pytorch_model-00006-of-00039.bin",
    "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00006-of-00039.bin",
    "model.layers.4.mlp.up_proj.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00006-of-00039.bin",
    "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00006-of-00039.bin",
    "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00006-of-00039.bin",
    "model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00039.bin",
    "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00006-of-00039.bin",
    "model.layers.5.input_layernorm.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.5.mlp.down_proj.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.5.mlp.up_proj.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00039.bin",
    "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00007-of-00039.bin",
    "model.layers.6.input_layernorm.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.6.mlp.down_proj.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.6.mlp.up_proj.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00039.bin",
    "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00008-of-00039.bin",
    "model.layers.7.input_layernorm.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.7.mlp.down_proj.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.7.mlp.up_proj.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00039.bin",
    "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00009-of-00039.bin",
    "model.layers.8.input_layernorm.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.8.mlp.down_proj.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.8.mlp.up_proj.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00010-of-00039.bin",
    "model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00039.bin",
    "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.9.input_layernorm.weight": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.mlp.down_proj.weight": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.mlp.up_proj.weight": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00011-of-00039.bin",
    "model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00039.bin",
    "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00012-of-00039.bin",
    "model.norm.weight": "pytorch_model-00038-of-00039.bin"
  }
}
```
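
This index is what lets transformers load the 39 shards as one model: weight_map names the shard that stores each tensor. A quick inspection sketch using only the standard library:

```python
import json
from collections import defaultdict

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])        # 13476839424 bytes (~13.5 GB of fp16 weights)
print(index["weight_map"]["lm_head.weight"])  # pytorch_model-00039-of-00039.bin

# Group tensor names by shard to see what each of the 39 files contains.
tensors_by_shard = defaultdict(list)
for name, shard in index["weight_map"].items():
    tensors_by_shard[shard].append(name)
print(len(tensors_by_shard))  # 39
```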
special_tokens_map.json ADDED (+1 -0):

```json
{}
```
tokenizer.model ADDED (+3 -0; Git LFS pointer):

```
version https://git-lfs.github.com/spec/v1
oid sha256:bc820fc43f4173d6362c16658c409ed423929a807e55a984af96cce1277d39a4
size 772031
```
tokenizer_config.json ADDED (+1 -0):

```json
{"bos_token": "", "eos_token": "", "model_max_length": 1000000000000000019884624838656, "tokenizer_class": "LlamaTokenizer", "unk_token": ""}
```