See axolotl config
axolotl version: 0.4.0
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/disk2/alexandria/data/graphs_2_text_hermes.jsonl
type: sharegpt
conversation: chatml
dataset_prepared_path:
val_set_size: 0.0
output_dir: /workspace/disk2/alexandria/models/g2t_hermes/
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project: alexandria
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 0
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
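For reference, with type: sharegpt and conversation: chatml in the datasets block, each line of graphs_2_text_hermes.jsonl is expected to follow axolotl's ShareGPT schema (a "conversations" list of turns with "from" and "value" keys). The record below is a hypothetical illustration only; the actual graph schema and text in the dataset are not documented here.

{
  "conversations": [
    {"from": "human", "value": "{'nodes': [...], 'edges': [...]}"},
    {"from": "gpt", "value": "Reconstructed plaintext describing the graph."}
  ]
}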
workspace/disk2/alexandria/models/g2t_hermes/
This model is a fine-tuned version of NousResearch/Hermes-2-Pro-Mistral-7B on a version of the Project Alexandria dataset, designed to turn input knowledge graphs structured as Python dictionaries into reconstructed plaintext.
Model description
This is a prototype model, trained quickly as a proof of concept. No hyperparameter tuning or extensive data cleaning has been done beyond filtering out entries that meet any of the following criteria (a rough sketch of such a filter follows this list):
- Contain a refusal of some sort
- Have an empty input and/or output
- Resulted from queries that produced an error output
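A minimal sketch of such a filter, assuming a ShareGPT-style JSONL file; the marker strings and field names below are illustrative assumptions, not the exact criteria used:

import json

# Illustrative refusal/error markers; the actual detection logic used for the dataset is not documented here.
REFUSAL_MARKERS = ("I'm sorry", "I cannot", "As an AI")
ERROR_MARKERS = ("Traceback", "Error:")

def keep(example):
    turns = example.get("conversations", [])
    if not turns:
        return False
    for turn in turns:
        text = turn.get("value", "").strip()
        if not text:  # empty input and/or output
            return False
        if any(marker in text for marker in REFUSAL_MARKERS + ERROR_MARKERS):
            return False  # refusal or errored query
    return True

with open("graphs_2_text_hermes.jsonl") as src, open("filtered.jsonl", "w") as dst:
    for line in src:
        if keep(json.loads(line)):
            dst.write(line)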
Intended uses & limitations
The model follows a form of ChatML with no system prompt. The model should be prompted as follows:
<|im_start|>user
[Input your knowledge graph structured as a Python dictionary here.]<|im_end|>
<|im_start|>assistant
(Make sure to put a newline after "assistant". Do not include this text in parentheses in your prompt.)
Greedy sampling is recommended for generating outputs.
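As a concrete illustration, the sketch below runs inference with Hugging Face transformers, building the ChatML prompt by hand exactly as described above and decoding greedily (do_sample=False). The example knowledge graph is a hypothetical placeholder; the schema the model was actually trained on is not documented here.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TearGosling/mistral_hermes2_alexandria_v0_g2t"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Hypothetical input graph structured as a Python dictionary.
graph = {"entities": ["Marie Curie", "Pierre Curie"], "relations": [["Marie Curie", "married_to", "Pierre Curie"]]}

# ChatML prompt with no system turn and a newline after "assistant", as noted above.
prompt = f"<|im_start|>user\n{graph}<|im_end|>\n<|im_start|>assistant\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=False)  # greedy decoding
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))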
No extensive data cleaning has been done. The model may not always produce a satisfactorily detailed or faithful reconstruction of the input knowledge graph. Since this model is only 7B parameters, certain relationships in the input graph may not be properly picked up by the model. As stated before, this model is a prototype.
Training and evaluation data
The data was generated using several large language models.
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
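Note that the total train batch size above follows directly from the axolotl config: micro_batch_size (2) × num_devices (8) × gradient_accumulation_steps (1) = 16.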
Training results
Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0