
ChatML format. The dataset contains about 1,400 entries ranging from 8k to 16k tokens, split three ways between long-context multi-turn chat, long-context summarization, and writing analysis. Full fine-tune using a linear RoPE scale factor of 2.0, trained for five epochs with a learning rate of 1e-5.
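As a rough illustration of the two pieces of the setup above, here is a minimal sketch: a helper that wraps turns in standard ChatML `<|im_start|>`/`<|im_end|>` delimiters, and the usual linear-interpolation form of RoPE scaling, where position indices are divided by the scale factor (2.0 stretches an 8k base window toward 16k). Function names and defaults here are illustrative, not taken from this model's training code.

```python
import math

def to_chatml(messages):
    # Format a list of {"role", "content"} dicts as a ChatML prompt,
    # ending with an open assistant turn for generation.
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

def rope_angles(pos, dim=8, base=10000.0, linear_factor=2.0):
    # Linear RoPE scaling divides the position index by the scale
    # factor before computing the rotary angles, so positions up to
    # factor * original_context map into the pretrained range.
    scaled = pos / linear_factor
    return [scaled / (base ** (2 * i / dim)) for i in range(dim // 2)]
```

With `linear_factor=2.0`, position 16384 is rotated as if it were position 8192, which is why the scaled model can attend over roughly twice its original context.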


Model tree for openerotica/Llama-3-lima-nsfw-16k-test
