
GGUF quants

Dragonwar 7b - α

The time of the great dragon war is upon us! How many different fantasy novels? One hundred and seventeen, you say?

Trained with full text windows, then completion training, then ORPO, then one more epoch of the full text, rotated a quarter of a window length. That last training pass settled everything down, and the model seems quite coherent.
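As a rough illustration of that last step (not the actual training code; `chunk_windows`, the corpus stand-in, and the 4096 window length are all hypothetical), the quarter rotation just shifts where each training window starts:

import torch

tokens = torch.arange(100_000)  # stand-in for the tokenized corpus

def chunk_windows(tokens: torch.Tensor, window: int, offset: int = 0) -> list[torch.Tensor]:
    """Split a token stream into fixed-size training windows, starting at `offset`."""
    return [tokens[i : i + window] for i in range(offset, len(tokens) - window + 1, window)]

window = 4096  # hypothetical context length
plain_epochs = chunk_windows(tokens, window)                # original alignment
rotated_epoch = chunk_windows(tokens, window, window // 4)  # shifted 1/4 into the window

Every token that sat at a window boundary in the earlier epochs now lands mid-window, which is one plausible reading of why the final pass settles things down.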

How to Use

This is not a chat model; it is intended for story mode or similar. Use no prompt template; just start with a bit of story, or a name.

*** Prologue

The sun rose
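A minimal way to run it in that mode with transformers, assuming a standard causal-LM setup (the generation parameters here are simple placeholders; see Settings below for preferred samplers):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/dragonwar-7b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# No chat template: seed the context with a bit of story and let it continue.
prompt = "*** Prologue\n\nThe sun rose"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=1.0)
print(tokenizer.decode(out[0], skip_special_tokens=True))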

Author's notes are highly effective. You can use an author's note of something like:

[King Robb Stark and Lord Rahl are at war.]
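Exactly where the note lands in the context depends on your frontend; a rough sketch assuming the common convention of injecting it a few lines before the end of the context (`apply_authors_note` and the depth of 3 are hypothetical, not part of this model):

def apply_authors_note(context: str, note: str, depth: int = 3) -> str:
    """Insert an author's note `depth` lines from the end of the context."""
    lines = context.splitlines()
    cut = max(len(lines) - depth, 0)
    return "\n".join(lines[:cut] + [note] + lines[cut:])

story = (
    "The banners gathered on the plain.\n"
    "Horns sounded in the east.\n"
    "Riders crested the hill.\n"
    "Steel rang in the valley below."
)
print(apply_authors_note(story, "[King Robb Stark and Lord Rahl are at war.]"))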

You have quite a cast of characters to draw from. Perhaps Perrin makes a stop by the Waystone Inn, or Zeddicus and Gandalf have a smoke together.

Settings

I usually use a Min-P of 0.1, dynamic temperature (dynatemp) between 0.5 and 2, and a smoothing factor between 0.05 and 0.2.
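If your backend doesn't expose Min-P, the filter is simple to reproduce by hand; a minimal sketch over raw logits (not code from this card):

import torch

logits = torch.randn(32_000)  # stand-in for one decoding step of model output

def min_p_filter(logits: torch.Tensor, min_p: float = 0.1) -> torch.Tensor:
    """Drop tokens whose probability falls below min_p times the top token's probability."""
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < threshold, float("-inf"))

probs = torch.softmax(min_p_filter(logits), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)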

Hacks

To get rid of unwanted EOS tokens, I did the following:

import torch

# Zero the lm_head row for the EOS token (id 2 in this model's vocabulary),
# then patch state_dict so the zeroed weights are what gets saved out.
result_dict: dict[str, torch.Tensor] = model.state_dict()
result_dict['lm_head.weight'][2] = 0
model.state_dict = lambda: result_dict

So now there are no EOS tokens at all, ever.
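If you would rather leave the weights untouched, a runtime-only alternative (not what was done for this model) is to ban EOS with a transformers logits processor, reusing `model`, `inputs`, and `tokenizer` from the earlier sketch:

import torch
from transformers import LogitsProcessor, LogitsProcessorList

class BanEOS(LogitsProcessor):
    """Force the EOS logit to -inf at every decoding step."""
    def __init__(self, eos_token_id: int):
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.eos_token_id] = float("-inf")
        return scores

out = model.generate(
    **inputs,
    logits_processor=LogitsProcessorList([BanEOS(tokenizer.eos_token_id)]),
    max_new_tokens=256,
    do_sample=True,
)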
