---
license: other
license_name: xt-aurora-license
license_link: LICENSE
language:
- en
- es
tags:
- conversational
- chat
- roleplay
library_name: GGUF
pipeline_tag: text-generation
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
---
We, XeTute, introduce AURORA V1.0, the first model in this series that is actually usable. Its use cases are the following:
- Next-word prediction for mobile devices:
- The model can be reliably packaged into a keyboard app to make next-word suggestions more accurate.
- Conversations:
- AURORA can engage in conversations using the Vicuna format; remember to replace "ASSISTANT" with "AURORA", though.
- AURORA can engage in SFW roleplay given simple character definitions. It was not trained on NSFW content.
- AURORA can handle simple, short Q&A. It was also trained on factual data, which means it performs well for its size.
We used datasets created by our team and translated them into different languages, partially using HuggingFaceH4/zephyr-7b-beta and mostly using human translators hired through various freelancing services.
Note:
- All previous beta versions of this series of SLMs were deleted because they received almost no downloads.
- V1.0 is the last model in this series that will be published, due to too little community activity.
Metadata:
- Name: AURORA
- Version: 1.0
- Author: XeTute
- Size: 1.1B
- Architecture: LLaMA (Transformer).
Recommended settings:
- Temperature: 0.1 to 0.4 is stable.
- Context Length: 2048 (base) to 4096 (RoPE) works well for storytelling, roleplaying, and simple conversations.
- Output Length: 256 tokens is very stable, but you can extend to 512. Anything beyond that is risky; text may become repetitive.
- Chat Format:

      {name of your roleplay}: {input}
      {name of AURORA's character}: {output}

  or,

      USER: {input}
      AURORA: {output}
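As a minimal sketch, the chat layout above can be assembled programmatically before being passed to a GGUF runtime such as llama.cpp. The helper below is hypothetical (not part of any official AURORA tooling); the names `build_prompt`, `user_name`, and `model_name` are our own illustration:

```python
def build_prompt(history, user_name="USER", model_name="AURORA"):
    """Format (speaker, text) turns into the Vicuna-style layout AURORA expects.

    The prompt ends with the model's name and a colon so that the
    runtime generates AURORA's next reply as a completion.
    """
    lines = [
        f"{user_name if speaker == 'user' else model_name}: {text}"
        for speaker, text in history
    ]
    lines.append(f"{model_name}:")  # leave the final turn open for generation
    return "\n".join(lines)

print(build_prompt([("user", "Hi! Who are you?")]))
```

For a roleplay session, pass your character names via `user_name` and `model_name` instead of the defaults, and sample the completion with the settings recommended above (temperature 0.1 to 0.4, up to 256 output tokens).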
We wish you a friendly chat with AURORA.