---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: gemma-prompt
  results: []
---

# gemma-prompt

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b), trained on two datasets: dolly-15k for general knowledge and a curated subset of m-a-p/MusicPile.

## Model description

The model is fully trained for music knowledge and for automating the generation of music prompts from a scene's vibe.

## Intended uses & limitations

The intended use of the model is to generate music prompts that take into account elements of the surrounding environment, such as the types of buildings nearby, the weather, the time of day, and nearby landmarks.
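
For reference, a minimal inference sketch: the adapter path and the example scene prompt below are illustrative placeholders, not part of the original card or training setup.

```python
# A minimal sketch of loading the adapter and generating a music prompt.
# ADAPTER_ID is a hypothetical placeholder; replace it with the actual
# adapter repo id or local path.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

ADAPTER_ID = "path/to/gemma-prompt"  # hypothetical placeholder

# Loads the google/gemma-2b base weights and applies the PEFT adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    ADAPTER_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Describe the surrounding environment and ask for a music prompt.
scene = (
    "Write a music prompt for this scene: old brick warehouses, "
    "light rain, late evening, a lit-up suspension bridge nearby."
)
inputs = tokenizer(scene, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```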

## Training and evaluation data

Two datasets were used to train the model: dolly-15k, for general question answering and instruction following, and a curated subset of m-a-p/MusicPile, used to fine-tune the model specifically on musical vibes and on descriptions of different objects, places, and things. We evaluate the model on a held-out portion of m-a-p/MusicPile.

## Training procedure

- Split the MusicPile dataset to focus on distilled music knowledge (a data-preparation sketch follows below).
- Used dolly-15k for general instruction fine-tuning.
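
A rough data-preparation sketch consistent with the description above; the exact Hub dataset ids, the 5% evaluation fraction, and the seed are assumptions rather than the original preprocessing code.

```python
# Sketch of loading the two training datasets and holding out an eval slice.
from datasets import load_dataset

# General instruction-following data ("dolly-15k" in this card).
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

# Music-domain data; split off a held-out slice of MusicPile for evaluation.
# The 5% test fraction and seed are assumptions, not the original curation rule.
music = load_dataset("m-a-p/MusicPile", split="train")
music = music.train_test_split(test_size=0.05, seed=42)
music_train, music_eval = music["train"], music["test"]
```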

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 888
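
As a guide, here is how the values above map onto `transformers.TrainingArguments`; `output_dir` and the optimizer string are assumptions, not taken from the original run (the Adam betas and epsilon listed above are the library defaults).

```python
# Reconstruction of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-prompt",       # assumed, not from the original run
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,   # 2 per device x 8 steps = total batch 16
    optim="adamw_torch",             # Adam betas/epsilon above are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=888,
)
```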

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.0.1a0+cxx11.abi
- Datasets 2.19.0
- Tokenizers 0.19.1