---
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
library_name: transformers
tags:
- dolly
- bloomz
- Spanish
datasets:
- dvilasuero/databricks-dolly-15k-es-deepl
inference: false
widget:
- text: >-
    Below is an instruction that describes a task, paired with an input that
    provides further context.
    Write a response that appropriately completes the request.
    ### Instruction:
    Tell me about alpacas
language:
- es
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/dolloom/resolve/main/dolloom_logo.png" alt="Dolloom logo">
</div>
# DOLLOOM: Dolly 🐑 + BLOOMz 💮
## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library by fine-tuning the base model **BigScience/BLOOMz 7B1** on the **Dolly dataset (translated to Spanish)** with the **LoRA** method.
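As a reference, here is a minimal sketch of how a LoRA adapter of this kind is created with PEFT. The hyperparameters (rank, alpha, dropout) are illustrative assumptions, not the released training configuration:
```
# Sketch of LoRA adapter creation with PEFT (hyperparameters are assumptions).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-7b1-mt")

lora_config = LoraConfig(
    r=8,                                 # assumed rank
    lora_alpha=16,                       # assumed scaling factor
    target_modules=["query_key_value"],  # fused attention projection in BLOOM blocks
    lora_dropout=0.05,                   # assumed dropout
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```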
## Model Description
Instruction-tuned version of the BigScience Large Open-science Open-access Multilingual Language Model: [BLOOMz 7B1 MT](https://huggingface.co/bigscience/bloomz-7b1-mt).
## Training data
TBA
### Supported Tasks and Leaderboards
TBA
### Training procedure
TBA
## How to use
A minimal inference sketch, assuming the adapter is loaded on top of [BLOOMz 7B1 MT](https://huggingface.co/bigscience/bloomz-7b1-mt) with `peft`. The prompt template, including the `### Response:` marker, is an assumption extrapolated from the widget example above:
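```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "bigscience/bloomz-7b1-mt"
adapter_id = "mrm8488/dolloom"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

# Prompt template assumed from the widget example above.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\nTell me about alpacas\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```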
## Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { dolloom (Revision 599b95a) },
year = 2023,
url = { https://huggingface.co/mrm8488/dolloom },
doi = { 10.57967/hf/0540 },
publisher = { Hugging Face }
}
```