ThatOneShortGuy committed
Commit e5b9bb5
1 Parent(s): 93319e3

Upload model

Files changed (2)
  1. README.md +2 -50
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -1,54 +1,6 @@
  ---
  library_name: peft
- license: apache-2.0
- datasets:
- - ThatOneShortGuy/SongLyrics
- language:
- - en
- tags:
- - music
  ---
- # Musical Falcon
-
- [OpenAssistant/falcon-7b-sft-mix-2000](https://huggingface.co/OpenAssistant/falcon-7b-sft-mix-2000) model fine-tuned using PEFT on
- [Song Lyrics](https://huggingface.co/datasets/ThatOneShortGuy/SongLyrics) to write lyrics to songs.
-
- ## Model Details
- - **Finetuned from**: [OpenAssistant/falcon-7b-sft-mix-2000](https://huggingface.co/OpenAssistant/falcon-7b-sft-mix-2000)
- - **Model Type**: Causal decoder-only transformer language model
- - **Language**: English (and limited capabilities in German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
- - **License**: Apache 2.0
- - **Contact**: Lol don't. This is just for fun.
-
- ## Usage
- The basic format for loading the model is:
- ```python
- from peft import PeftModel, PeftConfig
- from transformers import AutoModelForCausalLM
-
- config = PeftConfig.from_pretrained("ThatOneShortGuy/MusicalFalcon")
- model = AutoModelForCausalLM.from_pretrained("OpenAssistant/falcon-7b-sft-mix-2000")
- model = PeftModel.from_pretrained(model, "ThatOneShortGuy/MusicalFalcon")
- ```
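The removed snippet above loads only the base model and the adapter; a tokenizer is still needed to run anything. A minimal sketch, assuming the base model's tokenizer is the one to use (the original card does not say which tokenizer the adapter was trained with):

```python
from transformers import AutoTokenizer

# Assumption: the adapter reuses the base model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/falcon-7b-sft-mix-2000")
```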
-
- ## Prompting
- Since this model comes from [OpenAssistant/falcon-7b-sft-mix-2000](https://huggingface.co/OpenAssistant/falcon-7b-sft-mix-2000), it uses the same prompt structure.
- Two special tokens are used to mark the beginning of user and assistant turns: `<|prompter|>` and `<|assistant|>`. Each turn ends with an `<|endoftext|>` token.
- The training prompt used the structure:
- ```
- <|prompter|>Come up with the lyrics for a song from "{artist}" {"from " + year if year else ""} titled "{title}".<|endoftext|>
- <|assistant|>Sure! Here are the lyrics:
- {lyrics}
- <|endoftext|>
- ```
-
- However, it still seems to work just fine using:
- ```
- <|prompter|>Write me a song titled "{title}"<|endoftext|><|assistant|>
- ```
- or anything of a similar nature. Feel free to add a description of the song in there too.
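Tying the removed Usage and Prompting sections together, here is a hedged sketch of building the shorter prompt and generating lyrics. It assumes the `model` and `tokenizer` objects from the snippets above; the title, `max_new_tokens`, and sampling settings are illustrative choices, not values from the original card.

```python
import torch

# Hypothetical example title; swap in whatever song you want.
title = "Binary Sunset Blues"
prompt = f'<|prompter|>Write me a song titled "{title}"<|endoftext|><|assistant|>'

# Tokenize the prompt and generate until the model emits <|endoftext|>.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```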
-
-
  ## Training procedure


@@ -57,7 +9,7 @@ The following `bitsandbytes` quantization config was used during training:
  - load_in_4bit: False
  - llm_int8_threshold: 6.0
  - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: True
+ - llm_int8_enable_fp32_cpu_offload: False
  - llm_int8_has_fp16_weight: False
  - bnb_4bit_quant_type: fp4
  - bnb_4bit_use_double_quant: False
@@ -65,4 +17,4 @@ The following `bitsandbytes` quantization config was used during training:
  ### Framework versions


- - PEFT 0.5.0.dev0
+ - PEFT 0.5.0.dev0
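For reference, the `bitsandbytes` flags listed in these hunks roughly correspond to a `transformers.BitsAndBytesConfig` like the sketch below. The `load_in_8bit` flag sits outside the visible hunk, so treating it as True is an assumption, and `device_map="auto"` is an illustrative choice rather than something this diff records.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # assumption: not shown in the visible hunk
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,  # the value this commit sets in the card
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
)

# Load the base model with the quantization settings listed in the card.
model = AutoModelForCausalLM.from_pretrained(
    "OpenAssistant/falcon-7b-sft-mix-2000",
    quantization_config=bnb_config,
    device_map="auto",
)
```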
 
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2ce5f35b9b245a6d1824dfe6c249cf9f9a9b8934d4022751544f78cc3e08b4a3
+ oid sha256:907acaa533a5aaa4a1221ba83188b69773125eccd2acb91d1c378d3cc2298d6c
  size 130641741