---
license: other
license_name: xt-aurora-license
license_link: LICENSE
language:
- en
- es
tags:
- conversational
- chat
- roleplay
library_name: GGUF
pipeline_tag: text-generation
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ca8c3c5495933ab066c33c/4iYoUWbwZVIld2K3red-T.png)
We, XeTute, introduce AURORA V1.0, the first model in this series that is actually usable.
Its use cases are the following:
- Next-word prediction for mobile devices:
  - This model can be reliably packaged into a keyboard app to make next-word suggestions more accurate.
- Conversations:
  - AURORA can engage in conversations using the Vicuna format; remember to replace "ASSISTANT" with "AURORA" (see the sketch after this list).
  - AURORA can engage in SFW roleplay with simple character definitions. It wasn't trained on NSFW.
  - AURORA can engage in simple, short Q&A. It was also trained on factual data, which means it performs well for its size.
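
As a rough illustration of the Vicuna-style format with "ASSISTANT" renamed to "AURORA", the sketch below uses llama-cpp-python; the GGUF filename and the opening system line are placeholders, not part of this release.

```python
# Minimal sketch using llama-cpp-python; the model filename is a placeholder,
# so point it at whichever AURORA GGUF quantization you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="aurora-v1.0.gguf", n_ctx=2048)

# Vicuna-style turns, with "ASSISTANT" replaced by "AURORA" as noted above.
prompt = (
    "A chat between a curious user and AURORA, a friendly assistant.\n"
    "USER: Give me a one-sentence fun fact about auroras.\n"
    "AURORA:"
)

out = llm(prompt, max_tokens=256, temperature=0.2, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```
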
We used datasets created by our team and translated them into different languages, partially using HuggingFaceH4/zephyr-7b-beta, but mostly using humans we hired through different freelancing services.
<a href='https://ko-fi.com/C0C2ZXNON' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
Note:
- All previous beta versions of this series of SLMs were deleted because they received almost no downloads.
- V1.0 is the last model in this series that will be published, due to too little community activity.
Metadata:
- Name: AURORA
- Version: 1.0
- Author: XeTute
- Size: 1.1B
- Architecture: LLaMA (Transformer).
Recommended settings:
- Temperature: 0.1 to 0.4 is stable.
- Context Length: 2048 (base) to 4096 (RoPE) works well for story-telling, role-playing and simple conversations.
- Output Length: 256 works very stably, but you can extend it to 512. Anything beyond that point is risky; text might become repetitious. (A sketch combining these settings follows the chat formats below.)
- Chat Format:
For roleplay:
```
{name of your roleplay character}: {input}
{name of AURORA's character}: {output}
```
or, for normal chatting:
```
USER: {input}
AURORA: {output}
```
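
Putting the recommended settings together, here is a minimal sketch, again with llama-cpp-python and a placeholder filename. Extending the context to 4096 with `rope_freq_scale=0.5` (linear RoPE scaling from a 2048-token base) is our assumption about how the extension would be configured, not something this card prescribes.

```python
from llama_cpp import Llama

# Placeholder filename; rope_freq_scale=0.5 is standard linear RoPE scaling
# for doubling a 2048-token base context to 4096 (an assumption, not from
# this card).
llm = Llama(
    model_path="aurora-v1.0.gguf",
    n_ctx=4096,
    rope_freq_scale=0.5,
)

# Role-play format from the "Chat Format" section above; the names are examples.
prompt = (
    "Traveler: Tell me about the lights you saw over the fjord last night.\n"
    "Aurora:"
)

out = llm(
    prompt,
    max_tokens=256,       # stable output length; 512 is possible, longer risks repetition
    temperature=0.3,      # within the recommended 0.1 to 0.4 range
    stop=["Traveler:"],   # stop before the model writes the user's next turn
)
print(out["choices"][0]["text"].strip())
```
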
We wish you a friendly chat with AURORA.